Test Report: Docker_Linux_containerd 21978

                    
c78c82fa8bc5e05550c6fccb0bebb9cb966c725e:2025-11-24:42489

Failed tests (4/420)

Order  Failed test                                                  Duration (s)
406    TestStartStop/group/old-k8s-version/serial/DeployApp        13.41
407    TestStartStop/group/no-preload/serial/DeployApp             13.47
420    TestStartStop/group/embed-certs/serial/DeployApp            14.38
440    TestStartStop/group/default-k8s-diff-port/serial/DeployApp  14.28
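To reproduce one of these failures outside CI, a minimal sketch (assuming the standard minikube integration-test layout under test/integration and a locally built minikube binary; the exact harness flags this job passes are not shown in this report):

  # hypothetical local re-run; the package path and timeout are assumptions, not taken from this report
  # the whole old-k8s-version group is selected because DeployApp depends on the earlier FirstStart step
  go test ./test/integration -v -timeout 60m -run "TestStartStop/group/old-k8s-version"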
TestStartStop/group/old-k8s-version/serial/DeployApp (13.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-128377 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bfaec734-d874-4dcb-b31f-feb87adccfca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bfaec734-d874-4dcb-b31f-feb87adccfca] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003838321s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-128377 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
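The failing assertion above comes from the test exec'ing "ulimit -n" inside the busybox pod and comparing the soft open-files limit against the expected 1048576. A minimal manual check, reusing the context and pod names shown in this log (the docker exec variant is only an assumed cross-check of the kic node container, not something the test runs):

  # what the test runs inside the pod; it fails because this printed 1024 instead of 1048576
  kubectl --context old-k8s-version-128377 exec busybox -- /bin/sh -c "ulimit -n"
  # hypothetical cross-check of the limit inside the node container itself
  docker exec old-k8s-version-128377 sh -c "ulimit -n"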
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-128377
helpers_test.go:243: (dbg) docker inspect old-k8s-version-128377:

-- stdout --
	[
	    {
	        "Id": "2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7",
	        "Created": "2025-11-24T09:04:51.081869704Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 696955,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:04:51.124349133Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7/hosts",
	        "LogPath": "/var/lib/docker/containers/2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7/2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7-json.log",
	        "Name": "/old-k8s-version-128377",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-128377:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-128377",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7",
	                "LowerDir": "/var/lib/docker/overlay2/1b1691990697dca2c1039c44453446d25814644b5c2e14c7ed7f94a719a51d83-init/diff:/var/lib/docker/overlay2/a062700147ad5d1f8f2a68f70ba6ad34ea6495dd365bcb265ab17ea27961837b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1b1691990697dca2c1039c44453446d25814644b5c2e14c7ed7f94a719a51d83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1b1691990697dca2c1039c44453446d25814644b5c2e14c7ed7f94a719a51d83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1b1691990697dca2c1039c44453446d25814644b5c2e14c7ed7f94a719a51d83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-128377",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-128377/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-128377",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-128377",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-128377",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1b825735b854737d663311b12a71789ec27a2117f701b1d752b938a4e9f325be",
	            "SandboxKey": "/var/run/docker/netns/1b825735b854",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-128377": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5e2ac3220d9f4f0222496592b8e5141116283ec11109477dec7a51401ec91c02",
	                    "EndpointID": "4ad14cff7e04c8fe264f407478b59f88dc3ab8d1c7ab17924a24adb832eca462",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "be:3f:51:5a:9c:89",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-128377",
	                        "2f10becef587"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-128377 -n old-k8s-version-128377
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-128377 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-128377 logs -n 25: (1.058474478s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-203355 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                              │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                              │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ delete  │ -p missing-upgrade-058813                                                                                                                                                                                                                           │ missing-upgrade-058813 │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:04 UTC │
	│ ssh     │ -p cilium-203355 sudo systemctl cat docker --no-pager                                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo docker system info                                                                                                                                                                                                            │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo containerd config dump                                                                                                                                                                                                        │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo crio config                                                                                                                                                                                                                   │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ delete  │ -p cilium-203355                                                                                                                                                                                                                                    │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:04 UTC │
	│ start   │ -p old-k8s-version-128377 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:05 UTC │
	│ start   │ -p no-preload-820576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:05 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:04:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:04:47.686335  696018 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:04:47.686445  696018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:04:47.686456  696018 out.go:374] Setting ErrFile to fd 2...
	I1124 09:04:47.686474  696018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:04:47.686683  696018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 09:04:47.687133  696018 out.go:368] Setting JSON to false
	I1124 09:04:47.688408  696018 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13624,"bootTime":1763961464,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:04:47.688532  696018 start.go:143] virtualization: kvm guest
	I1124 09:04:47.690354  696018 out.go:179] * [no-preload-820576] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:04:47.691472  696018 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:04:47.691501  696018 notify.go:221] Checking for updates...
	I1124 09:04:47.693590  696018 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:04:47.694681  696018 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:04:47.695683  696018 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 09:04:47.697109  696018 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:04:47.698248  696018 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:04:47.699807  696018 config.go:182] Loaded profile config "cert-expiration-869306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 09:04:47.699947  696018 config.go:182] Loaded profile config "kubernetes-upgrade-521313": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:04:47.700091  696018 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:04:47.700236  696018 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:04:47.724639  696018 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:04:47.724770  696018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:04:47.791833  696018 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-24 09:04:47.780432821 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:04:47.791998  696018 docker.go:319] overlay module found
	I1124 09:04:47.794089  696018 out.go:179] * Using the docker driver based on user configuration
	I1124 09:04:47.795621  696018 start.go:309] selected driver: docker
	I1124 09:04:47.795639  696018 start.go:927] validating driver "docker" against <nil>
	I1124 09:04:47.795651  696018 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:04:47.796325  696018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:04:47.859511  696018 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 09:04:47.848833175 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:04:47.859748  696018 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:04:47.859957  696018 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:04:47.861778  696018 out.go:179] * Using Docker driver with root privileges
	I1124 09:04:47.862632  696018 cni.go:84] Creating CNI manager for ""
	I1124 09:04:47.862696  696018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:04:47.862708  696018 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:04:47.862775  696018 start.go:353] cluster config:
	{Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:04:47.863875  696018 out.go:179] * Starting "no-preload-820576" primary control-plane node in "no-preload-820576" cluster
	I1124 09:04:47.864812  696018 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 09:04:47.865865  696018 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:04:47.866835  696018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:04:47.866921  696018 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:04:47.866958  696018 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json ...
	I1124 09:04:47.867001  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json: {Name:mk04f43d651118a00ac1be32029cffb149669d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:47.867208  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:47.890231  696018 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:04:47.890260  696018 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:04:47.890281  696018 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:04:47.890321  696018 start.go:360] acquireMachinesLock for no-preload-820576: {Name:mk6b6fb581999217c645edacaa9c18971e97964f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:47.890432  696018 start.go:364] duration metric: took 88.402µs to acquireMachinesLock for "no-preload-820576"
	I1124 09:04:47.890474  696018 start.go:93] Provisioning new machine with config: &{Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNS
Log:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:04:47.890567  696018 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:04:48.739369  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:40906->192.168.76.2:8443: read: connection reset by peer
	I1124 09:04:48.739430  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:48.740184  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:48.920539  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:48.921019  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:49.420530  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:49.420996  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:46.813535  695520 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:04:46.813778  695520 start.go:159] libmachine.API.Create for "old-k8s-version-128377" (driver="docker")
	I1124 09:04:46.813816  695520 client.go:173] LocalClient.Create starting
	I1124 09:04:46.813892  695520 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem
	I1124 09:04:46.813936  695520 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:46.813967  695520 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:46.814043  695520 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem
	I1124 09:04:46.814076  695520 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:46.814095  695520 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:46.814441  695520 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:04:46.831913  695520 cli_runner.go:211] docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:04:46.831996  695520 network_create.go:284] running [docker network inspect old-k8s-version-128377] to gather additional debugging logs...
	I1124 09:04:46.832018  695520 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377
	W1124 09:04:46.848875  695520 cli_runner.go:211] docker network inspect old-k8s-version-128377 returned with exit code 1
	I1124 09:04:46.848912  695520 network_create.go:287] error running [docker network inspect old-k8s-version-128377]: docker network inspect old-k8s-version-128377: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-128377 not found
	I1124 09:04:46.848928  695520 network_create.go:289] output of [docker network inspect old-k8s-version-128377]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-128377 not found
	
	** /stderr **
	I1124 09:04:46.849044  695520 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:46.866840  695520 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c654f70fdf0e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:f7:ca:91:9d:ad} reservation:<nil>}
	I1124 09:04:46.867443  695520 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1081c4000c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:b1:6d:32:2c:78} reservation:<nil>}
	I1124 09:04:46.868124  695520 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30fdd1988974 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:59:2f:0a:61:81} reservation:<nil>}
	I1124 09:04:46.868877  695520 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cd297979890 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:91:f3:e4:95:17} reservation:<nil>}
	I1124 09:04:46.869272  695520 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9bf62793deff IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0a:d1:a9:3b:89:29} reservation:<nil>}
	I1124 09:04:46.869983  695520 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5fa0f78c53ad IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:9e:96:d6:0a:fe:a6} reservation:<nil>}
	I1124 09:04:46.870809  695520 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e158e0}
	I1124 09:04:46.870832  695520 network_create.go:124] attempt to create docker network old-k8s-version-128377 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 09:04:46.870880  695520 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-128377 old-k8s-version-128377
	I1124 09:04:46.993201  695520 network_create.go:108] docker network old-k8s-version-128377 192.168.103.0/24 created
	I1124 09:04:46.993243  695520 kic.go:121] calculated static IP "192.168.103.2" for the "old-k8s-version-128377" container
	I1124 09:04:46.993321  695520 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:04:47.015308  695520 cli_runner.go:164] Run: docker volume create old-k8s-version-128377 --label name.minikube.sigs.k8s.io=old-k8s-version-128377 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:04:47.034791  695520 oci.go:103] Successfully created a docker volume old-k8s-version-128377
	I1124 09:04:47.034869  695520 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-128377-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-128377 --entrypoint /usr/bin/test -v old-k8s-version-128377:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:04:47.772927  695520 oci.go:107] Successfully prepared a docker volume old-k8s-version-128377
	I1124 09:04:47.773023  695520 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 09:04:47.773041  695520 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 09:04:47.773133  695520 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-128377:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 09:04:50.987600  695520 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-128377:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.214396647s)
	I1124 09:04:50.987639  695520 kic.go:203] duration metric: took 3.214593361s to extract preloaded images to volume ...
	W1124 09:04:50.987789  695520 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:04:50.987849  695520 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:04:50.987920  695520 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:04:51.061728  695520 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-128377 --name old-k8s-version-128377 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-128377 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-128377 --network old-k8s-version-128377 --ip 192.168.103.2 --volume old-k8s-version-128377:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:04:51.401514  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Running}}
	I1124 09:04:51.426748  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:04:51.456228  695520 cli_runner.go:164] Run: docker exec old-k8s-version-128377 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:04:51.515517  695520 oci.go:144] the created container "old-k8s-version-128377" has a running status.
	I1124 09:04:51.515571  695520 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa...
	I1124 09:04:47.893309  696018 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:04:47.893645  696018 start.go:159] libmachine.API.Create for "no-preload-820576" (driver="docker")
	I1124 09:04:47.893687  696018 client.go:173] LocalClient.Create starting
	I1124 09:04:47.893789  696018 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem
	I1124 09:04:47.893833  696018 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:47.893861  696018 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:47.893953  696018 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem
	I1124 09:04:47.893982  696018 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:47.893999  696018 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:47.894436  696018 cli_runner.go:164] Run: docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:04:47.915789  696018 cli_runner.go:211] docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:04:47.915886  696018 network_create.go:284] running [docker network inspect no-preload-820576] to gather additional debugging logs...
	I1124 09:04:47.915925  696018 cli_runner.go:164] Run: docker network inspect no-preload-820576
	W1124 09:04:47.939725  696018 cli_runner.go:211] docker network inspect no-preload-820576 returned with exit code 1
	I1124 09:04:47.939760  696018 network_create.go:287] error running [docker network inspect no-preload-820576]: docker network inspect no-preload-820576: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-820576 not found
	I1124 09:04:47.939788  696018 network_create.go:289] output of [docker network inspect no-preload-820576]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-820576 not found
	
	** /stderr **
	I1124 09:04:47.939956  696018 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:47.960368  696018 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c654f70fdf0e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:f7:ca:91:9d:ad} reservation:<nil>}
	I1124 09:04:47.961456  696018 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1081c4000c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:b1:6d:32:2c:78} reservation:<nil>}
	I1124 09:04:47.962397  696018 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30fdd1988974 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:59:2f:0a:61:81} reservation:<nil>}
	I1124 09:04:47.963597  696018 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cd297979890 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:91:f3:e4:95:17} reservation:<nil>}
	I1124 09:04:47.964832  696018 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e9cf50}
	I1124 09:04:47.964868  696018 network_create.go:124] attempt to create docker network no-preload-820576 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 09:04:47.964929  696018 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-820576 no-preload-820576
	I1124 09:04:48.017684  696018 network_create.go:108] docker network no-preload-820576 192.168.85.0/24 created
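Annotation: the network-create step is the single docker command logged just above; restated here with line breaks and comments for readability (same flags and values, nothing added):

	# per-profile bridge network with a fixed /24, gateway and MTU 1500
	docker network create \
	  --driver=bridge \
	  --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=no-preload-820576 \
	  no-preload-820576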
	I1124 09:04:48.017725  696018 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-820576" container
	I1124 09:04:48.017804  696018 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:04:48.037793  696018 cli_runner.go:164] Run: docker volume create no-preload-820576 --label name.minikube.sigs.k8s.io=no-preload-820576 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:04:48.057638  696018 oci.go:103] Successfully created a docker volume no-preload-820576
	I1124 09:04:48.057738  696018 cli_runner.go:164] Run: docker run --rm --name no-preload-820576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820576 --entrypoint /usr/bin/test -v no-preload-820576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:04:48.192090  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:48.509962  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:48.827547  696018 cache.go:107] acquiring lock: {Name:mkbcabeb5a23ff077ffdad64c71e9fe699d94040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827544  696018 cache.go:107] acquiring lock: {Name:mk92c82896924ab47423467b25ccd98ee4128baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827656  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:04:48.827672  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:04:48.827672  696018 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 138.757µs
	I1124 09:04:48.827689  696018 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:04:48.827683  696018 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 176.678µs
	I1124 09:04:48.827708  696018 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:04:48.827708  696018 cache.go:107] acquiring lock: {Name:mkf3a006b133f81ed32779d427a8d0a9b25f9000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827735  696018 cache.go:107] acquiring lock: {Name:mkd74819cb24442927f7fb2cffd47478de40e14c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827766  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:04:48.827773  696018 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 69.196µs
	I1124 09:04:48.827780  696018 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:04:48.827788  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:04:48.827796  696018 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 65.204µs
	I1124 09:04:48.827804  696018 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:04:48.827791  696018 cache.go:107] acquiring lock: {Name:mk6b573bbd33cfc3c3f77668030fb064598572fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827820  696018 cache.go:107] acquiring lock: {Name:mk7f052905284f586f4f1cf24b8c34cc48e0b85b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827866  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:04:48.827873  696018 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 57.027µs
	I1124 09:04:48.827882  696018 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:04:48.827796  696018 cache.go:107] acquiring lock: {Name:mk1d635b72f6d026600360916178f900a450350e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827887  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:04:48.827900  696018 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 115.907µs
	I1124 09:04:48.827910  696018 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:04:48.827914  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:04:48.827921  696018 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 128.45µs
	I1124 09:04:48.827937  696018 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:04:48.827719  696018 cache.go:107] acquiring lock: {Name:mk8023690ce5b18d9a1789b2f878bf92c1381799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.828021  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:04:48.828033  696018 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 327.502µs
	I1124 09:04:48.828051  696018 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:04:48.828067  696018 cache.go:87] Successfully saved all images to host disk.
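Annotation: each of the eight "save to tar file ... succeeded" steps above is effectively a no-op because the image tarball already exists in the local cache, which is why every one completes in microseconds. To see what is cached on the test host (on a typical workstation the same tree lives under ~/.minikube):

	# the eight image tarballs referenced above live under this directory
	ls -R /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64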
	I1124 09:04:50.353018  696018 cli_runner.go:217] Completed: docker run --rm --name no-preload-820576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820576 --entrypoint /usr/bin/test -v no-preload-820576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.295229864s)
	I1124 09:04:50.353061  696018 oci.go:107] Successfully prepared a docker volume no-preload-820576
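Annotation: the "preload sidecar" above is a throwaway container. It mounts the freshly created no-preload-820576 volume at /var and its entrypoint is just /usr/bin/test with the argument -d /var/lib, so a zero exit code simply confirms the volume mounts and contains /var/lib before the real node container is started. A minimal sketch of the same check:

	# exits 0 if the named volume can be mounted and /var/lib exists inside it
	docker run --rm --entrypoint /usr/bin/test \
	  -v no-preload-820576:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f \
	  -d /var/lib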
	I1124 09:04:50.353130  696018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1124 09:04:50.353205  696018 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:04:50.353233  696018 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:04:50.353275  696018 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:04:50.412447  696018 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-820576 --name no-preload-820576 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820576 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-820576 --network no-preload-820576 --ip 192.168.85.2 --volume no-preload-820576:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
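Annotation: the long docker run above is the actual "node": a privileged container attached to the just-created network with the calculated static IP, /var persisted on the named volume, and the apiserver/SSH/registry ports published on random localhost ports (SSH ends up on host port 33063 a few lines later). An abridged restatement of the key flags for readability, not a drop-in command (tmpfs mounts, /lib/modules, labels and the remaining --publish flags are omitted):

	# privileged node container on the profile network with a static IP;
	# 8443 (apiserver) and 22 (SSH) are published on random 127.0.0.1 ports
	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --network no-preload-820576 --ip 192.168.85.2 \
	  --volume no-preload-820576:/var \
	  --memory=3072mb --hostname no-preload-820576 --name no-preload-820576 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f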
	I1124 09:04:51.174340  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Running}}
	I1124 09:04:51.195074  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:04:51.216706  696018 cli_runner.go:164] Run: docker exec no-preload-820576 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:04:51.270513  696018 oci.go:144] the created container "no-preload-820576" has a running status.
	I1124 09:04:51.270555  696018 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa...
	I1124 09:04:51.639069  696018 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:04:51.669871  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:04:51.693409  696018 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:04:51.693441  696018 kic_runner.go:114] Args: [docker exec --privileged no-preload-820576 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:04:51.754414  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:04:51.781590  696018 machine.go:94] provisionDockerMachine start ...
	I1124 09:04:51.781685  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:51.808597  696018 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:51.809054  696018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 09:04:51.809092  696018 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:04:51.963230  696018 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-820576
	
	I1124 09:04:51.963276  696018 ubuntu.go:182] provisioning hostname "no-preload-820576"
	I1124 09:04:51.963339  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:51.984069  696018 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:51.984406  696018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 09:04:51.984432  696018 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-820576 && echo "no-preload-820576" | sudo tee /etc/hostname
	I1124 09:04:52.142431  696018 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-820576
	
	I1124 09:04:52.142545  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.163141  696018 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:52.163483  696018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 09:04:52.163520  696018 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820576/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:04:52.313074  696018 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:04:52.313103  696018 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:04:52.313151  696018 ubuntu.go:190] setting up certificates
	I1124 09:04:52.313174  696018 provision.go:84] configureAuth start
	I1124 09:04:52.313241  696018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:04:52.333178  696018 provision.go:143] copyHostCerts
	I1124 09:04:52.333250  696018 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:04:52.333267  696018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:04:52.333340  696018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:04:52.333454  696018 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:04:52.333479  696018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:04:52.333527  696018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:04:52.333610  696018 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:04:52.333631  696018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:04:52.333670  696018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:04:52.333745  696018 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.no-preload-820576 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-820576]
	I1124 09:04:52.372869  696018 provision.go:177] copyRemoteCerts
	I1124 09:04:52.372936  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:04:52.372984  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.391516  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
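Annotation: the sshutil line shows how every subsequent ssh_runner command reaches the container: SSH to the published localhost port with the per-profile key and the docker user. Handy when debugging a failed run by hand; a sketch using the exact values logged above:

	ssh -i /home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa \
	    -p 33063 docker@127.0.0.1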
	I1124 09:04:52.495715  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:04:52.515508  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 09:04:52.533110  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:04:52.549620  696018 provision.go:87] duration metric: took 236.431147ms to configureAuth
	I1124 09:04:52.549643  696018 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:04:52.549785  696018 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:04:52.549795  696018 machine.go:97] duration metric: took 768.185276ms to provisionDockerMachine
	I1124 09:04:52.549801  696018 client.go:176] duration metric: took 4.656107804s to LocalClient.Create
	I1124 09:04:52.549817  696018 start.go:167] duration metric: took 4.656176839s to libmachine.API.Create "no-preload-820576"
	I1124 09:04:52.549827  696018 start.go:293] postStartSetup for "no-preload-820576" (driver="docker")
	I1124 09:04:52.549837  696018 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:04:52.549917  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:04:52.549957  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.567598  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.670209  696018 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:04:52.673794  696018 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:04:52.673819  696018 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:04:52.673829  696018 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:04:52.673873  696018 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:04:52.673954  696018 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:04:52.674055  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:04:52.681571  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:04:51.668051  695520 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:04:51.701732  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:04:51.724111  695520 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:04:51.724139  695520 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-128377 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:04:51.779671  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:04:51.808240  695520 machine.go:94] provisionDockerMachine start ...
	I1124 09:04:51.808514  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:51.833533  695520 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:51.833868  695520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 09:04:51.833890  695520 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:04:51.988683  695520 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-128377
	
	I1124 09:04:51.988712  695520 ubuntu.go:182] provisioning hostname "old-k8s-version-128377"
	I1124 09:04:51.988769  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.008953  695520 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:52.009275  695520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 09:04:52.009299  695520 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-128377 && echo "old-k8s-version-128377" | sudo tee /etc/hostname
	I1124 09:04:52.164712  695520 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-128377
	
	I1124 09:04:52.164811  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.184388  695520 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:52.184674  695520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 09:04:52.184701  695520 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-128377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-128377/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-128377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:04:52.328284  695520 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:04:52.328315  695520 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:04:52.328349  695520 ubuntu.go:190] setting up certificates
	I1124 09:04:52.328371  695520 provision.go:84] configureAuth start
	I1124 09:04:52.328437  695520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-128377
	I1124 09:04:52.347382  695520 provision.go:143] copyHostCerts
	I1124 09:04:52.347441  695520 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:04:52.347449  695520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:04:52.347530  695520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:04:52.347615  695520 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:04:52.347624  695520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:04:52.347646  695520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:04:52.347699  695520 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:04:52.347707  695520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:04:52.347724  695520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:04:52.347767  695520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-128377 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-128377]
	I1124 09:04:52.449836  695520 provision.go:177] copyRemoteCerts
	I1124 09:04:52.449907  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:04:52.449955  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.467389  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.568756  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:04:52.590911  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 09:04:52.608291  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:04:52.625476  695520 provision.go:87] duration metric: took 297.076146ms to configureAuth
	I1124 09:04:52.625501  695520 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:04:52.625684  695520 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:04:52.625697  695520 machine.go:97] duration metric: took 817.329123ms to provisionDockerMachine
	I1124 09:04:52.625703  695520 client.go:176] duration metric: took 5.811878386s to LocalClient.Create
	I1124 09:04:52.625724  695520 start.go:167] duration metric: took 5.811947677s to libmachine.API.Create "old-k8s-version-128377"
	I1124 09:04:52.625737  695520 start.go:293] postStartSetup for "old-k8s-version-128377" (driver="docker")
	I1124 09:04:52.625751  695520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:04:52.625805  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:04:52.625861  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.643125  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.746507  695520 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:04:52.750419  695520 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:04:52.750446  695520 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:04:52.750471  695520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:04:52.750527  695520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:04:52.750621  695520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:04:52.750735  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:04:52.759275  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:04:52.779524  695520 start.go:296] duration metric: took 153.769147ms for postStartSetup
	I1124 09:04:52.779876  695520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-128377
	I1124 09:04:52.797331  695520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/config.json ...
	I1124 09:04:52.797607  695520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:04:52.797652  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.814633  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.914421  695520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:04:52.919231  695520 start.go:128] duration metric: took 6.107446039s to createHost
	I1124 09:04:52.919259  695520 start.go:83] releasing machines lock for "old-k8s-version-128377", held for 6.10762389s
	I1124 09:04:52.919326  695520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-128377
	I1124 09:04:52.937920  695520 ssh_runner.go:195] Run: cat /version.json
	I1124 09:04:52.937964  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.937993  695520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:04:52.938073  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.957005  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.957162  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:53.162492  695520 ssh_runner.go:195] Run: systemctl --version
	I1124 09:04:53.168749  695520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:04:53.173128  695520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:04:53.173198  695520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:04:53.196703  695520 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
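Annotation: the find/-exec pair above sidelines any pre-existing bridge or podman CNI configs by renaming them to *.mk_disabled, so only the CNI that minikube installs later stays active (here it disabled the crio bridge and podman bridge conflists). The same operation written out with proper quoting, as a sketch; the logged command is functionally identical but has its quotes stripped by the log formatter:

	# rename bridge/podman CNI configs so the container runtime ignores them
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;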
	I1124 09:04:53.196732  695520 start.go:496] detecting cgroup driver to use...
	I1124 09:04:53.196770  695520 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:04:53.196824  695520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:04:53.212821  695520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:04:53.226105  695520 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:04:53.226149  695520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:04:53.245323  695520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:04:53.261892  695520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:04:53.346225  695520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:04:53.440817  695520 docker.go:234] disabling docker service ...
	I1124 09:04:53.440886  695520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:04:53.466043  695520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:04:53.478621  695520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:04:53.566248  695520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:04:53.652228  695520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:04:53.665204  695520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:04:53.679300  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 09:04:53.689354  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:04:53.697996  695520 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:04:53.698043  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:04:53.706349  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.715138  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:04:53.724198  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.732594  695520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:04:53.740362  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:04:53.748766  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:04:53.757048  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:04:53.765265  695520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:04:53.772343  695520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:04:53.779254  695520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:04:53.856087  695520 ssh_runner.go:195] Run: sudo systemctl restart containerd
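Annotation: the block of sed edits above rewrites /etc/containerd/config.toml in place for this profile: point crictl at the containerd socket, pin the sandbox image to registry.k8s.io/pause:3.9, switch the runtime handlers to io.containerd.runc.v2 with SystemdCgroup = true (matching the systemd cgroup driver detected on the host), set conf_dir to /etc/cni/net.d, re-enable unprivileged ports, turn on ip_forward, then daemon-reload and restart containerd. A verification sketch to run inside the node after such a restart:

	cat /etc/crictl.yaml        # runtime-endpoint: unix:///run/containerd/containerd.sock
	grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	cat /proc/sys/net/ipv4/ip_forward   # should print 1
	systemctl is-active containerd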
	I1124 09:04:53.959050  695520 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:04:53.959110  695520 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:04:53.963133  695520 start.go:564] Will wait 60s for crictl version
	I1124 09:04:53.963185  695520 ssh_runner.go:195] Run: which crictl
	I1124 09:04:53.966895  695520 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:04:53.994878  695520 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:04:53.994934  695520 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.021265  695520 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.045827  695520 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1124 09:04:52.701569  696018 start.go:296] duration metric: took 151.731915ms for postStartSetup
	I1124 09:04:52.701858  696018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:04:52.719203  696018 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json ...
	I1124 09:04:52.719424  696018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:04:52.719488  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.736084  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.835481  696018 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:04:52.840061  696018 start.go:128] duration metric: took 4.94947332s to createHost
	I1124 09:04:52.840083  696018 start.go:83] releasing machines lock for "no-preload-820576", held for 4.94964132s
	I1124 09:04:52.840148  696018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:04:52.858132  696018 ssh_runner.go:195] Run: cat /version.json
	I1124 09:04:52.858160  696018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:04:52.858222  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.858246  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.877130  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.877482  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.975607  696018 ssh_runner.go:195] Run: systemctl --version
	I1124 09:04:53.031452  696018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:04:53.036065  696018 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:04:53.036130  696018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:04:53.059999  696018 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:04:53.060024  696018 start.go:496] detecting cgroup driver to use...
	I1124 09:04:53.060062  696018 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:04:53.060105  696018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:04:53.074505  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:04:53.086089  696018 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:04:53.086143  696018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:04:53.101555  696018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:04:53.118093  696018 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:04:53.204201  696018 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:04:53.300933  696018 docker.go:234] disabling docker service ...
	I1124 09:04:53.301034  696018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:04:53.320036  696018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:04:53.331959  696018 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:04:53.420508  696018 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:04:53.513830  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:04:53.526253  696018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:04:53.540562  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:53.865082  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 09:04:53.876277  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:04:53.885584  696018 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:04:53.885655  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:04:53.895158  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.904766  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:04:53.913841  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.922747  696018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:04:53.932360  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:04:53.943272  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:04:53.952416  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:04:53.961850  696018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:04:53.969795  696018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:04:53.977270  696018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:04:54.067216  696018 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 09:04:54.151776  696018 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:04:54.151849  696018 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:04:54.156309  696018 start.go:564] Will wait 60s for crictl version
	I1124 09:04:54.156367  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:54.160683  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:04:54.187130  696018 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:04:54.187193  696018 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.208524  696018 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.233294  696018 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1124 09:04:49.920675  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:49.921171  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:50.420805  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:50.421212  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:50.920534  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:54.046841  695520 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:54.064168  695520 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 09:04:54.068915  695520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:04:54.079411  695520 kubeadm.go:884] updating cluster {Name:old-k8s-version-128377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1124 09:04:54.079584  695520 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 09:04:54.079651  695520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:04:54.105064  695520 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:04:54.105092  695520 containerd.go:534] Images already preloaded, skipping extraction
	I1124 09:04:54.105153  695520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:04:54.131723  695520 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:04:54.131746  695520 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:04:54.131756  695520 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 containerd true true} ...
	I1124 09:04:54.131858  695520 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-128377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:04:54.131921  695520 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:04:54.160918  695520 cni.go:84] Creating CNI manager for ""
	I1124 09:04:54.160940  695520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:04:54.160955  695520 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:04:54.160976  695520 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-128377 NodeName:old-k8s-version-128377 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:04:54.161123  695520 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-128377"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:04:54.161190  695520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 09:04:54.169102  695520 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:04:54.169150  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:04:54.176962  695520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1124 09:04:54.191252  695520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:04:54.206931  695520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
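Annotation: at this point the kubelet drop-in, the kubelet unit and the rendered kubeadm config have been pushed to the node (the byte counts in the three scp lines above). To eyeball them during a failed run, a sketch reusing this profile's SSH port and key from the earlier sshutil lines:

	ssh -i /home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa \
	    -p 33068 docker@127.0.0.1 \
	    sudo cat /var/tmp/minikube/kubeadm.yaml.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf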
	I1124 09:04:54.220958  695520 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:04:54.225158  695520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:04:54.236116  695520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:04:54.319599  695520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:04:54.342135  695520 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377 for IP: 192.168.103.2
	I1124 09:04:54.342157  695520 certs.go:195] generating shared ca certs ...
	I1124 09:04:54.342176  695520 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.342355  695520 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:04:54.342406  695520 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:04:54.342416  695520 certs.go:257] generating profile certs ...
	I1124 09:04:54.342497  695520 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key
	I1124 09:04:54.342513  695520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt with IP's: []
	I1124 09:04:54.488402  695520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt ...
	I1124 09:04:54.488432  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt: {Name:mk87cd521056210340bc5798f0387b3f36dc4635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.488613  695520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key ...
	I1124 09:04:54.488628  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key: {Name:mk03c81f6da2f2b54dfd9fa0e30866e3372921ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.488712  695520 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1
	I1124 09:04:54.488729  695520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1124 09:04:54.543616  695520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1 ...
	I1124 09:04:54.543654  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1: {Name:mk2f5faeeb1a8cba2153625fbd7d3a7e54f95aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.543851  695520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1 ...
	I1124 09:04:54.543873  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1: {Name:mk7ed4cadcafdc2e1a661255372b702ae6719654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.543964  695520 certs.go:382] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt
	I1124 09:04:54.544040  695520 certs.go:386] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key
	I1124 09:04:54.544132  695520 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key
	I1124 09:04:54.544150  695520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt with IP's: []
	I1124 09:04:54.594781  695520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt ...
	I1124 09:04:54.594837  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt: {Name:mk33ff647329a0bdf714fd27ddf109ec15b6d483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.595015  695520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key ...
	I1124 09:04:54.595034  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key: {Name:mk9bf52d92c35c053f63b6073f2a38e1ff2182d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.595287  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:04:54.595344  695520 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:04:54.595359  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:04:54.595395  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:04:54.595433  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:04:54.595484  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:04:54.595553  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:04:54.596350  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:04:54.616384  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:04:54.633998  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:04:54.651552  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:04:54.669737  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 09:04:54.686876  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:04:54.703726  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:04:54.720840  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:04:54.737534  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:04:54.757717  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:04:54.774715  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:04:54.791052  695520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:04:54.802968  695520 ssh_runner.go:195] Run: openssl version
	I1124 09:04:54.808893  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:04:54.816748  695520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:04:54.820220  695520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:04:54.820260  695520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:04:54.854133  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:04:54.862216  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:04:54.870277  695520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:04:54.873860  695520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:04:54.873906  695520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:04:54.910146  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:04:54.919148  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:04:54.927753  695520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:04:54.931870  695520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:04:54.931921  695520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:04:54.972285  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
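The openssl/ln sequence above follows the standard OpenSSL CA-directory convention: each CA under /usr/share/ca-certificates is hashed with "openssl x509 -hash" and symlinked as /etc/ssl/certs/<hash>.0. A small sketch that re-derives the link name for the minikube CA and verifies it:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"   # should point back at minikubeCA.pem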
	I1124 09:04:54.981223  695520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:04:54.984999  695520 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:04:54.985067  695520 kubeadm.go:401] StartCluster: {Name:old-k8s-version-128377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:04:54.985165  695520 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:04:54.985213  695520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:04:55.012874  695520 cri.go:89] found id: ""
	I1124 09:04:55.012940  695520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:04:55.020831  695520 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:04:55.029069  695520 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:04:55.029111  695520 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:04:55.036334  695520 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:04:55.036348  695520 kubeadm.go:158] found existing configuration files:
	
	I1124 09:04:55.036384  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:04:55.044532  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:04:55.044579  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:04:55.051885  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:04:55.059335  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:04:55.059381  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:04:55.066924  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:04:55.075157  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:04:55.075202  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:04:55.082536  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:04:55.090276  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:04:55.090333  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:04:55.097848  695520 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
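If only the preflight checks need to be reproduced by hand, kubeadm exposes them as a separate phase; this illustrative variant reuses the exact config path and binary staging directory from the command above:

	sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml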
	I1124 09:04:55.141844  695520 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 09:04:55.142222  695520 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:04:55.176293  695520 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:04:55.176360  695520 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:04:55.176399  695520 kubeadm.go:319] OS: Linux
	I1124 09:04:55.176522  695520 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:04:55.176607  695520 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:04:55.176692  695520 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:04:55.176788  695520 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:04:55.176861  695520 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:04:55.176926  695520 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:04:55.177000  695520 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:04:55.177072  695520 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:04:55.267260  695520 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:04:55.267430  695520 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:04:55.267573  695520 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 09:04:55.406819  695520 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:04:55.408942  695520 out.go:252]   - Generating certificates and keys ...
	I1124 09:04:55.409040  695520 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:04:55.409154  695520 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:04:55.535942  695520 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:04:55.747446  695520 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:04:56.231180  695520 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:04:56.348617  695520 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:04:56.564540  695520 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:04:56.564771  695520 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-128377] and IPs [192.168.103.2 127.0.0.1 ::1]
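The SANs reported for the etcd serving certificate can be confirmed on the node with openssl; this sketch assumes the default kubeadm layout under the certificatesDir configured earlier (/var/lib/minikube/certs):

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/etcd/server.crt \
	  | grep -A1 'Subject Alternative Name'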
	I1124 09:04:54.234417  696018 cli_runner.go:164] Run: docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:54.252265  696018 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:04:54.256402  696018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:04:54.271173  696018 kubeadm.go:884] updating cluster {Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:04:54.271376  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:54.585565  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:54.895614  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:55.213448  696018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:04:55.213537  696018 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:04:55.248674  696018 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1124 09:04:55.248704  696018 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.5.24-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 09:04:55.248761  696018 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:55.248818  696018 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.248841  696018 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.248860  696018 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 09:04:55.248864  696018 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.248833  696018 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.248841  696018 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.249034  696018 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.250186  696018 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.250215  696018 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:55.250182  696018 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.24-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.250186  696018 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 09:04:55.250253  696018 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.250254  696018 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.250188  696018 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.250648  696018 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.411211  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139"
	I1124 09:04:55.411274  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.432666  696018 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1124 09:04:55.432717  696018 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.432775  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.436380  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810"
	I1124 09:04:55.436448  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.436570  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.438317  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b"
	I1124 09:04:55.438376  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.445544  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc"
	I1124 09:04:55.445608  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.462611  696018 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1124 09:04:55.462672  696018 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.462735  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.466873  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1124 09:04:55.466944  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1124 09:04:55.469707  696018 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1124 09:04:55.469760  696018 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.469761  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.469806  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.476188  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.5.24-0" and sha "8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d"
	I1124 09:04:55.476260  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.476601  696018 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1124 09:04:55.476645  696018 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.476700  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.476760  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.483510  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46"
	I1124 09:04:55.483571  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.493634  696018 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 09:04:55.493674  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.493687  696018 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 09:04:55.493730  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.504559  696018 cache_images.go:118] "registry.k8s.io/etcd:3.5.24-0" needs transfer: "registry.k8s.io/etcd:3.5.24-0" does not exist at hash "8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d" in container runtime
	I1124 09:04:55.504599  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.504606  696018 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.504646  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.512866  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.512892  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.512910  696018 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1124 09:04:55.512950  696018 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.512990  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.526695  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:04:55.526717  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.526785  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.539513  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1124 09:04:55.539663  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:04:55.546674  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.546750  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.546715  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.564076  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.567023  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1124 09:04:55.567049  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.567061  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1124 09:04:55.567151  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:04:55.598524  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.598552  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.598652  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1124 09:04:55.598735  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:04:55.614879  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.624975  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:55.625072  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:55.679323  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:04:55.684055  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1124 09:04:55.684090  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.684124  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.684140  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:04:55.684150  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0
	I1124 09:04:55.684159  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.684160  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1124 09:04:55.684171  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1124 09:04:55.684244  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:04:55.736024  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 09:04:55.736135  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 09:04:55.746073  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.746108  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1124 09:04:55.746157  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:55.746175  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.24-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.24-0': No such file or directory
	I1124 09:04:55.746191  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 --> /var/lib/minikube/images/etcd_3.5.24-0 (23728640 bytes)
	I1124 09:04:55.746248  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:55.801010  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 09:04:55.801049  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 09:04:55.808405  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.808441  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1124 09:04:55.880897  696018 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 09:04:55.880969  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1124 09:04:56.015999  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1124 09:04:56.068815  696018 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:04:56.068912  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:04:56.453297  696018 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1124 09:04:56.453371  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:57.304727  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.24-0: (1.235782073s)
	I1124 09:04:57.304763  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 from cache
	I1124 09:04:57.304794  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:57.304806  696018 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 09:04:57.304847  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:57.304858  696018 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:57.304920  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:56.768431  695520 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:04:56.768677  695520 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-128377] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 09:04:57.042517  695520 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:04:57.135211  695520 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:04:57.487492  695520 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:04:57.487607  695520 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:04:57.647815  695520 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:04:57.788032  695520 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:04:58.007063  695520 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:04:58.262043  695520 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:04:58.262616  695520 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:04:58.265868  695520 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:04:55.921561  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:04:55.921607  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:58.266858  695520 out.go:252]   - Booting up control plane ...
	I1124 09:04:58.266989  695520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:04:58.267065  695520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:04:58.267746  695520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:04:58.282824  695520 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:04:58.283699  695520 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:04:58.283773  695520 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:04:58.419897  695520 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
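While kubeadm waits for the control plane, kubelet and static-pod state can be checked directly on the node; the crictl label filter below is the same one minikube uses earlier in this log, and 10248 is the kubelet's default healthz port:

	curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy
	sudo crictl ps --label io.kubernetes.pod.namespace=kube-system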
	I1124 09:04:58.797650  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.492766226s)
	I1124 09:04:58.797672  696018 ssh_runner.go:235] Completed: which crictl: (1.492732478s)
	I1124 09:04:58.797693  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1124 09:04:58.797722  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:58.797742  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:58.797763  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:59.494097  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1124 09:04:59.494141  696018 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:04:59.494193  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:04:59.494314  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:00.636087  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.141861944s)
	I1124 09:05:00.636150  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1124 09:05:00.636183  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:05:00.636184  696018 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.141835433s)
	I1124 09:05:00.636272  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:05:00.636277  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:01.829551  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.193240306s)
	I1124 09:05:01.829586  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1124 09:05:01.829561  696018 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.193259021s)
	I1124 09:05:01.829618  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:05:01.829656  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:05:01.829661  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 09:05:01.829741  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:05:02.922442  695520 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502768 seconds
	I1124 09:05:02.922650  695520 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:05:02.938003  695520 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:05:03.487168  695520 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:05:03.487569  695520 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-128377 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:05:03.997647  695520 kubeadm.go:319] [bootstrap-token] Using token: jnao2u.ovlrxqviyhx4po41
	I1124 09:05:03.999063  695520 out.go:252]   - Configuring RBAC rules ...
	I1124 09:05:03.999223  695520 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:05:04.003823  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:05:04.010298  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:05:04.012923  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:05:04.015535  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:05:04.019043  695520 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:05:04.029389  695520 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:05:04.209549  695520 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:05:04.407855  695520 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:05:04.408750  695520 kubeadm.go:319] 
	I1124 09:05:04.408814  695520 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:05:04.408821  695520 kubeadm.go:319] 
	I1124 09:05:04.408930  695520 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:05:04.408949  695520 kubeadm.go:319] 
	I1124 09:05:04.408983  695520 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:05:04.409060  695520 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:05:04.409107  695520 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:05:04.409122  695520 kubeadm.go:319] 
	I1124 09:05:04.409207  695520 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:05:04.409227  695520 kubeadm.go:319] 
	I1124 09:05:04.409283  695520 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:05:04.409289  695520 kubeadm.go:319] 
	I1124 09:05:04.409340  695520 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:05:04.409401  695520 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:05:04.409519  695520 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:05:04.409531  695520 kubeadm.go:319] 
	I1124 09:05:04.409633  695520 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:05:04.409739  695520 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:05:04.409748  695520 kubeadm.go:319] 
	I1124 09:05:04.409856  695520 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jnao2u.ovlrxqviyhx4po41 \
	I1124 09:05:04.409989  695520 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be \
	I1124 09:05:04.410028  695520 kubeadm.go:319] 	--control-plane 
	I1124 09:05:04.410043  695520 kubeadm.go:319] 
	I1124 09:05:04.410157  695520 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:05:04.410168  695520 kubeadm.go:319] 
	I1124 09:05:04.410253  695520 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jnao2u.ovlrxqviyhx4po41 \
	I1124 09:05:04.410416  695520 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be 
	I1124 09:05:04.412734  695520 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:05:04.412863  695520 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
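The second warning mainly matters across reboots, since minikube starts kubelet explicitly earlier in this log (systemctl start kubelet); on a long-lived node it could be addressed as kubeadm suggests:

	sudo systemctl enable kubelet.service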
	I1124 09:05:04.412887  695520 cni.go:84] Creating CNI manager for ""
	I1124 09:05:04.412895  695520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:05:04.414780  695520 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:05:00.922661  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:00.922710  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:04.415630  695520 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:05:04.420099  695520 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 09:05:04.420115  695520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:05:04.433073  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
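A follow-up check that the applied CNI manifest actually rolled out, assuming (as in minikube's kindnet manifest) that it creates a DaemonSet named kindnet in kube-system:

	sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system rollout status daemonset/kindnet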
	I1124 09:05:05.091722  695520 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:05:05.091870  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-128377 minikube.k8s.io/updated_at=2025_11_24T09_05_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=old-k8s-version-128377 minikube.k8s.io/primary=true
	I1124 09:05:05.092348  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:05.102498  695520 ops.go:34] apiserver oom_adj: -16
	I1124 09:05:05.174868  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:05.675283  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:06.175310  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:02.915588  696018 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.085815853s)
	I1124 09:05:02.915634  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.085954166s)
	I1124 09:05:02.915671  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1124 09:05:02.915639  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 09:05:02.915716  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 09:05:02.976753  696018 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:05:02.976825  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:05:03.348632  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 09:05:03.348678  696018 cache_images.go:125] Successfully loaded all cached images
	I1124 09:05:03.348686  696018 cache_images.go:94] duration metric: took 8.099965824s to LoadCachedImages
	I1124 09:05:03.348703  696018 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1124 09:05:03.348825  696018 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:05:03.348894  696018 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:05:03.376137  696018 cni.go:84] Creating CNI manager for ""
	I1124 09:05:03.376168  696018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:05:03.376188  696018 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:05:03.376210  696018 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820576 NodeName:no-preload-820576 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:05:03.376350  696018 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-820576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:05:03.376422  696018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:05:03.385368  696018 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1124 09:05:03.385424  696018 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:05:03.394095  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1124 09:05:03.394128  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:05:03.394180  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1124 09:05:03.394191  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1124 09:05:03.394205  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1124 09:05:03.394225  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:05:03.399712  696018 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1124 09:05:03.399743  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1124 09:05:03.399797  696018 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1124 09:05:03.399839  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1124 09:05:03.414063  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1124 09:05:03.448582  696018 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1124 09:05:03.448623  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1124 09:05:03.941988  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:05:03.950659  696018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1124 09:05:03.964545  696018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:05:03.980698  696018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
	I1124 09:05:03.994370  696018 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:05:03.999682  696018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:05:04.011951  696018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:05:04.105068  696018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:05:04.129581  696018 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576 for IP: 192.168.85.2
	I1124 09:05:04.129609  696018 certs.go:195] generating shared ca certs ...
	I1124 09:05:04.129631  696018 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.129796  696018 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:05:04.129861  696018 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:05:04.129876  696018 certs.go:257] generating profile certs ...
	I1124 09:05:04.129944  696018 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key
	I1124 09:05:04.129964  696018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt with IP's: []
	I1124 09:05:04.178331  696018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt ...
	I1124 09:05:04.178368  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt: {Name:mk7a6d48f62cb24db3b80fa6902658a2fab15360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.178586  696018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key ...
	I1124 09:05:04.178605  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key: {Name:mke761c4ec29e36beccc716dc800bc8fd841e3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.178724  696018 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632
	I1124 09:05:04.178748  696018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 09:05:04.417670  696018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632 ...
	I1124 09:05:04.417694  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632: {Name:mk59a2d57d772e51aeeeb2a9a4dca760203e6d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.417874  696018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632 ...
	I1124 09:05:04.417897  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632: {Name:mkdb0be38fd80ef77438b49aa69b9308c6d28ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.418023  696018 certs.go:382] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt
	I1124 09:05:04.418147  696018 certs.go:386] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key
	I1124 09:05:04.418202  696018 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key
	I1124 09:05:04.418217  696018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt with IP's: []
	I1124 09:05:04.604435  696018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt ...
	I1124 09:05:04.604497  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt: {Name:mk5719f2112f16d39272baf4588ce9b65d33d2a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.604728  696018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key ...
	I1124 09:05:04.604746  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key: {Name:mk56d8ccc21a879d6506ee3380097e85fb4b4f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.605022  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:05:04.605073  696018 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:05:04.605084  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:05:04.605120  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:05:04.605160  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:05:04.605195  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:05:04.605369  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:05:04.606568  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:05:04.626964  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:05:04.644973  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:05:04.663649  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:05:04.681360  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:05:04.699027  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:05:04.716381  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:05:04.734298  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:05:04.752033  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:05:04.771861  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:05:04.789824  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:05:04.808313  696018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:05:04.826085  696018 ssh_runner.go:195] Run: openssl version
	I1124 09:05:04.834356  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:05:04.843772  696018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:05:04.848660  696018 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:05:04.848725  696018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:05:04.887168  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:05:04.897113  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:05:04.907480  696018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:05:04.911694  696018 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:05:04.911746  696018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:05:04.951326  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:05:04.961765  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:05:04.972056  696018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:05:04.976497  696018 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:05:04.976554  696018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:05:05.017003  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:05:05.027292  696018 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:05:05.031547  696018 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:05:05.031616  696018 kubeadm.go:401] StartCluster: {Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:05:05.031711  696018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:05:05.031765  696018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:05:05.062044  696018 cri.go:89] found id: ""
	I1124 09:05:05.062126  696018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:05:05.071887  696018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:05:05.082157  696018 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:05:05.082217  696018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:05:05.091225  696018 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:05:05.091248  696018 kubeadm.go:158] found existing configuration files:
	
	I1124 09:05:05.091296  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:05:05.100600  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:05:05.100657  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:05:05.110555  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:05:05.119216  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:05:05.119288  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:05:05.127876  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:05:05.136154  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:05:05.136205  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:05:05.145077  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:05:05.154290  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:05:05.154338  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:05:05.162702  696018 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:05:05.200662  696018 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:05:05.200757  696018 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:05:05.269623  696018 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:05:05.269714  696018 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:05:05.269770  696018 kubeadm.go:319] OS: Linux
	I1124 09:05:05.269842  696018 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:05:05.269920  696018 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:05:05.270003  696018 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:05:05.270084  696018 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:05:05.270155  696018 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:05:05.270223  696018 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:05:05.270303  696018 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:05:05.270377  696018 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:05:05.332844  696018 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:05:05.332992  696018 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:05:05.333150  696018 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:05:06.734694  696018 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:05:06.738817  696018 out.go:252]   - Generating certificates and keys ...
	I1124 09:05:06.738929  696018 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:05:06.739072  696018 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:05:06.832143  696018 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:05:06.955015  696018 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:05:07.027143  696018 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:05:07.115762  696018 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:05:07.265716  696018 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:05:07.265857  696018 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-820576] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 09:05:07.364684  696018 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:05:07.364865  696018 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-820576] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 09:05:07.523315  696018 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:05:07.590589  696018 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:05:07.746307  696018 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:05:07.746426  696018 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:05:07.869677  696018 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:05:07.978931  696018 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:05:08.053720  696018 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:05:08.085227  696018 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:05:08.160011  696018 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:05:08.160849  696018 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:05:08.165435  696018 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:05:05.923694  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:05.923742  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:06.675415  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:07.175277  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:07.676031  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:08.174962  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:08.675088  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:09.175102  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:09.675096  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:10.175027  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:10.675655  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:11.175703  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:08.166975  696018 out.go:252]   - Booting up control plane ...
	I1124 09:05:08.167117  696018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:05:08.167189  696018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:05:08.167816  696018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:05:08.183769  696018 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:05:08.183936  696018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:05:08.191856  696018 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:05:08.191990  696018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:05:08.192031  696018 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:05:08.308076  696018 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:05:08.308205  696018 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:05:09.309901  696018 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001908715s
	I1124 09:05:09.316051  696018 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:05:09.316157  696018 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 09:05:09.316247  696018 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:05:09.316315  696018 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:05:10.320869  696018 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004644301s
	I1124 09:05:10.832866  696018 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.516703459s
	I1124 09:05:12.317179  696018 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001080604s
	I1124 09:05:12.331544  696018 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:05:12.339378  696018 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:05:12.347526  696018 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:05:12.347705  696018 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-820576 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:05:12.354657  696018 kubeadm.go:319] [bootstrap-token] Using token: awoygq.wealvtzys3befsou
	I1124 09:05:12.355757  696018 out.go:252]   - Configuring RBAC rules ...
	I1124 09:05:12.355888  696018 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:05:12.359613  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:05:12.364202  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:05:12.366491  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:05:12.369449  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:05:12.371508  696018 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:05:12.722783  696018 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:05:13.137535  696018 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:05:13.723038  696018 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:05:13.724197  696018 kubeadm.go:319] 
	I1124 09:05:13.724302  696018 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:05:13.724317  696018 kubeadm.go:319] 
	I1124 09:05:13.724412  696018 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:05:13.724424  696018 kubeadm.go:319] 
	I1124 09:05:13.724520  696018 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:05:13.724630  696018 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:05:13.724716  696018 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:05:13.724730  696018 kubeadm.go:319] 
	I1124 09:05:13.724818  696018 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:05:13.724831  696018 kubeadm.go:319] 
	I1124 09:05:13.724897  696018 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:05:13.724906  696018 kubeadm.go:319] 
	I1124 09:05:13.724990  696018 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:05:13.725105  696018 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:05:13.725212  696018 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:05:13.725221  696018 kubeadm.go:319] 
	I1124 09:05:13.725338  696018 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:05:13.725493  696018 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:05:13.725510  696018 kubeadm.go:319] 
	I1124 09:05:13.725601  696018 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token awoygq.wealvtzys3befsou \
	I1124 09:05:13.725765  696018 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be \
	I1124 09:05:13.725804  696018 kubeadm.go:319] 	--control-plane 
	I1124 09:05:13.725816  696018 kubeadm.go:319] 
	I1124 09:05:13.725934  696018 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:05:13.725944  696018 kubeadm.go:319] 
	I1124 09:05:13.726041  696018 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token awoygq.wealvtzys3befsou \
	I1124 09:05:13.726243  696018 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be 
	I1124 09:05:13.728504  696018 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:05:13.728661  696018 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:05:13.728704  696018 cni.go:84] Creating CNI manager for ""
	I1124 09:05:13.728716  696018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:05:13.730529  696018 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:05:10.924882  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:10.924923  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:11.109506  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:47578->192.168.76.2:8443: read: connection reset by peer
	I1124 09:05:11.421112  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:11.421646  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:11.920950  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:11.921496  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:12.421219  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:12.421692  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:12.921430  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:12.921911  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:13.420431  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:13.420926  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:13.920542  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:13.921060  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:14.420434  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:14.420859  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:11.675776  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:12.175192  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:12.675267  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:13.175941  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:13.675281  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:14.175267  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:14.675185  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.175391  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.675966  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.175887  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.675144  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.175281  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.260591  695520 kubeadm.go:1114] duration metric: took 12.168846115s to wait for elevateKubeSystemPrivileges
	I1124 09:05:17.260625  695520 kubeadm.go:403] duration metric: took 22.275566194s to StartCluster
	I1124 09:05:17.260655  695520 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:17.260738  695520 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:05:17.261860  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:17.262121  695520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:05:17.262124  695520 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:05:17.262197  695520 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:05:17.262308  695520 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-128377"
	I1124 09:05:17.262334  695520 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-128377"
	I1124 09:05:17.262358  695520 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:05:17.262376  695520 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:05:17.262351  695520 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-128377"
	I1124 09:05:17.262443  695520 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-128377"
	I1124 09:05:17.262844  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:05:17.263075  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:05:17.263365  695520 out.go:179] * Verifying Kubernetes components...
	I1124 09:05:17.264408  695520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:05:17.287510  695520 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-128377"
	I1124 09:05:17.287559  695520 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:05:17.287978  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:05:17.288769  695520 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:13.732137  696018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:05:13.737711  696018 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1124 09:05:13.737726  696018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:05:13.752118  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:05:13.951744  696018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:05:13.951795  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:13.951847  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-820576 minikube.k8s.io/updated_at=2025_11_24T09_05_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=no-preload-820576 minikube.k8s.io/primary=true
	I1124 09:05:13.962047  696018 ops.go:34] apiserver oom_adj: -16
	I1124 09:05:14.022754  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:14.523671  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.023231  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.523083  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.023230  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.523666  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.022940  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.523444  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.290230  695520 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:17.290253  695520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:05:17.290314  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:05:17.317679  695520 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:17.317704  695520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:05:17.317768  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:05:17.319048  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:05:17.343853  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:05:17.366525  695520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:05:17.411998  695520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:05:17.447003  695520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:17.463082  695520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:17.632983  695520 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 09:05:17.634312  695520 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-128377" to be "Ready" ...
	I1124 09:05:17.888856  695520 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:05:18.022851  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:18.523601  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:18.589169  696018 kubeadm.go:1114] duration metric: took 4.637423043s to wait for elevateKubeSystemPrivileges
	I1124 09:05:18.589209  696018 kubeadm.go:403] duration metric: took 13.557597169s to StartCluster
	I1124 09:05:18.589237  696018 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:18.589321  696018 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:05:18.590747  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:18.590988  696018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:05:18.591000  696018 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:05:18.591095  696018 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:05:18.591206  696018 addons.go:70] Setting storage-provisioner=true in profile "no-preload-820576"
	I1124 09:05:18.591219  696018 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:05:18.591236  696018 addons.go:239] Setting addon storage-provisioner=true in "no-preload-820576"
	I1124 09:05:18.591251  696018 addons.go:70] Setting default-storageclass=true in profile "no-preload-820576"
	I1124 09:05:18.591275  696018 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:05:18.591283  696018 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820576"
	I1124 09:05:18.591664  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:05:18.591855  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:05:18.592299  696018 out.go:179] * Verifying Kubernetes components...
	I1124 09:05:18.593599  696018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:05:18.615163  696018 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:18.615451  696018 addons.go:239] Setting addon default-storageclass=true in "no-preload-820576"
	I1124 09:05:18.615530  696018 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:05:18.615851  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:05:18.616223  696018 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:18.616245  696018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:05:18.616301  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:05:18.646443  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:05:18.647885  696018 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:18.647963  696018 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:05:18.648059  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:05:18.675529  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:05:18.685797  696018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:05:18.752704  696018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:05:18.775922  696018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:18.800792  696018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:18.878758  696018 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 09:05:18.880873  696018 node_ready.go:35] waiting up to 6m0s for node "no-preload-820576" to be "Ready" ...
	I1124 09:05:19.096304  696018 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:05:14.921188  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:14.921633  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:15.421327  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:15.421818  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:15.920573  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:15.921034  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:16.421282  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:16.421841  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:16.921386  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:16.921942  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:17.420551  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:17.421007  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:17.920666  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:17.921181  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:18.420539  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:18.421011  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:18.920611  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:18.921079  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:19.420539  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:19.421004  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:17.889849  695520 addons.go:530] duration metric: took 627.656763ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:05:18.137738  695520 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-128377" context rescaled to 1 replicas
	W1124 09:05:19.637948  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	I1124 09:05:19.097398  696018 addons.go:530] duration metric: took 506.310963ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:05:19.383938  696018 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-820576" context rescaled to 1 replicas
	W1124 09:05:20.884989  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	I1124 09:05:19.920806  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:19.921207  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:20.420831  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:20.421312  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:20.920613  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:20.921185  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:21.420832  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:21.421240  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:21.920531  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:21.921019  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:22.420552  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1124 09:05:21.638057  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:23.638668  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:26.137883  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:23.383937  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	W1124 09:05:25.384443  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	I1124 09:05:27.421276  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:27.421318  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1124 09:05:28.138098  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:30.638120  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:27.884284  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	W1124 09:05:29.884474  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	W1124 09:05:32.384199  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	I1124 09:05:31.637332  695520 node_ready.go:49] node "old-k8s-version-128377" is "Ready"
	I1124 09:05:31.637368  695520 node_ready.go:38] duration metric: took 14.003009675s for node "old-k8s-version-128377" to be "Ready" ...
	I1124 09:05:31.637385  695520 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:05:31.637443  695520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:05:31.650126  695520 api_server.go:72] duration metric: took 14.387953281s to wait for apiserver process to appear ...
	I1124 09:05:31.650156  695520 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:05:31.650179  695520 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:05:31.654078  695520 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 09:05:31.655253  695520 api_server.go:141] control plane version: v1.28.0
	I1124 09:05:31.655280  695520 api_server.go:131] duration metric: took 5.117021ms to wait for apiserver health ...
	I1124 09:05:31.655289  695520 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:05:31.658830  695520 system_pods.go:59] 8 kube-system pods found
	I1124 09:05:31.658868  695520 system_pods.go:61] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:31.658877  695520 system_pods.go:61] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:31.658889  695520 system_pods.go:61] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:31.658895  695520 system_pods.go:61] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:31.658906  695520 system_pods.go:61] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:31.658910  695520 system_pods.go:61] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:31.658916  695520 system_pods.go:61] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:31.658921  695520 system_pods.go:61] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:31.658927  695520 system_pods.go:74] duration metric: took 3.632262ms to wait for pod list to return data ...
	I1124 09:05:31.658936  695520 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:05:31.660923  695520 default_sa.go:45] found service account: "default"
	I1124 09:05:31.660942  695520 default_sa.go:55] duration metric: took 2.000088ms for default service account to be created ...
	I1124 09:05:31.660950  695520 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:05:31.664223  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:31.664263  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:31.664272  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:31.664280  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:31.664284  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:31.664287  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:31.664291  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:31.664294  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:31.664300  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:31.664333  695520 retry.go:31] will retry after 195.108791ms: missing components: kube-dns
	I1124 09:05:31.863438  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:31.863494  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:31.863505  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:31.863515  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:31.863520  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:31.863525  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:31.863528  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:31.863540  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:31.863557  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:31.863579  695520 retry.go:31] will retry after 244.252087ms: missing components: kube-dns
	I1124 09:05:32.111547  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:32.111586  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:32.111595  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:32.111603  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:32.111608  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:32.111614  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:32.111628  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:32.111634  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:32.111641  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:32.111660  695520 retry.go:31] will retry after 471.342676ms: missing components: kube-dns
	I1124 09:05:32.587354  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:32.587384  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Running
	I1124 09:05:32.587389  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:32.587393  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:32.587397  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:32.587402  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:32.587405  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:32.587408  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:32.587411  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Running
	I1124 09:05:32.587420  695520 system_pods.go:126] duration metric: took 926.463548ms to wait for k8s-apps to be running ...
	I1124 09:05:32.587428  695520 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:05:32.587503  695520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:05:32.602305  695520 system_svc.go:56] duration metric: took 14.864147ms WaitForService to wait for kubelet
	I1124 09:05:32.602336  695520 kubeadm.go:587] duration metric: took 15.340181249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:05:32.602385  695520 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:05:32.605212  695520 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:05:32.605242  695520 node_conditions.go:123] node cpu capacity is 8
	I1124 09:05:32.605271  695520 node_conditions.go:105] duration metric: took 2.87532ms to run NodePressure ...
	I1124 09:05:32.605293  695520 start.go:242] waiting for startup goroutines ...
	I1124 09:05:32.605308  695520 start.go:247] waiting for cluster config update ...
	I1124 09:05:32.605327  695520 start.go:256] writing updated cluster config ...
	I1124 09:05:32.605690  695520 ssh_runner.go:195] Run: rm -f paused
	I1124 09:05:32.610319  695520 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:32.614557  695520 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vxxnm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.619322  695520 pod_ready.go:94] pod "coredns-5dd5756b68-vxxnm" is "Ready"
	I1124 09:05:32.619349  695520 pod_ready.go:86] duration metric: took 4.765973ms for pod "coredns-5dd5756b68-vxxnm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.622417  695520 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.626873  695520 pod_ready.go:94] pod "etcd-old-k8s-version-128377" is "Ready"
	I1124 09:05:32.626900  695520 pod_ready.go:86] duration metric: took 4.45394ms for pod "etcd-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.629800  695520 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.634310  695520 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-128377" is "Ready"
	I1124 09:05:32.634338  695520 pod_ready.go:86] duration metric: took 4.514426ms for pod "kube-apiserver-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.637382  695520 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.015375  695520 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-128377" is "Ready"
	I1124 09:05:33.015406  695520 pod_ready.go:86] duration metric: took 378.000797ms for pod "kube-controller-manager-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.215146  695520 pod_ready.go:83] waiting for pod "kube-proxy-fpbs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.614362  695520 pod_ready.go:94] pod "kube-proxy-fpbs2" is "Ready"
	I1124 09:05:33.614392  695520 pod_ready.go:86] duration metric: took 399.215049ms for pod "kube-proxy-fpbs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.815166  695520 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.214969  695520 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-128377" is "Ready"
	I1124 09:05:34.214999  695520 pod_ready.go:86] duration metric: took 399.806564ms for pod "kube-scheduler-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.215011  695520 pod_ready.go:40] duration metric: took 1.604660669s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:34.261989  695520 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 09:05:34.263612  695520 out.go:203] 
	W1124 09:05:34.264723  695520 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 09:05:34.265770  695520 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 09:05:34.267170  695520 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-128377" cluster and "default" namespace by default
	I1124 09:05:32.422898  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:32.423021  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:05:32.423106  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:05:32.453902  685562 cri.go:89] found id: "1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:05:32.453922  685562 cri.go:89] found id: "4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	I1124 09:05:32.453927  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:05:32.453929  685562 cri.go:89] found id: ""
	I1124 09:05:32.453937  685562 logs.go:282] 3 containers: [1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:05:32.454000  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.458469  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.462439  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.466262  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:05:32.466335  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:05:32.496086  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:05:32.496112  685562 cri.go:89] found id: ""
	I1124 09:05:32.496122  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:05:32.496186  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.500443  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:05:32.500532  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:05:32.528567  685562 cri.go:89] found id: ""
	I1124 09:05:32.528602  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.528610  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:05:32.528617  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:05:32.528677  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:05:32.557355  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:05:32.557375  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:05:32.557379  685562 cri.go:89] found id: ""
	I1124 09:05:32.557388  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:05:32.557445  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.561666  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.565691  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:05:32.565776  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:05:32.594818  685562 cri.go:89] found id: ""
	I1124 09:05:32.594841  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.594848  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:05:32.594855  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:05:32.594900  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:05:32.625049  685562 cri.go:89] found id: "4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:05:32.625068  685562 cri.go:89] found id: "87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0"
	I1124 09:05:32.625073  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:05:32.625078  685562 cri.go:89] found id: ""
	I1124 09:05:32.625087  685562 logs.go:282] 3 containers: [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d 87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:05:32.625142  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.630042  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.634965  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.639315  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:05:32.639376  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:05:32.669355  685562 cri.go:89] found id: ""
	I1124 09:05:32.669384  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.669392  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:05:32.669398  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:05:32.669449  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:05:32.697559  685562 cri.go:89] found id: ""
	I1124 09:05:32.697586  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.697596  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:05:32.697610  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:05:32.697645  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:05:32.736120  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:05:32.736153  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:05:32.768484  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:05:32.768526  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:05:32.836058  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:05:32.836100  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:05:32.853541  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:05:32.853613  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1124 09:05:33.384739  696018 node_ready.go:49] node "no-preload-820576" is "Ready"
	I1124 09:05:33.384778  696018 node_ready.go:38] duration metric: took 14.503869435s for node "no-preload-820576" to be "Ready" ...
	I1124 09:05:33.384797  696018 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:05:33.384861  696018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:05:33.401268  696018 api_server.go:72] duration metric: took 14.81022929s to wait for apiserver process to appear ...
	I1124 09:05:33.401299  696018 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:05:33.401324  696018 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 09:05:33.406015  696018 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 09:05:33.407175  696018 api_server.go:141] control plane version: v1.35.0-beta.0
	I1124 09:05:33.407215  696018 api_server.go:131] duration metric: took 5.908148ms to wait for apiserver health ...
	I1124 09:05:33.407226  696018 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:05:33.410293  696018 system_pods.go:59] 8 kube-system pods found
	I1124 09:05:33.410331  696018 system_pods.go:61] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.410338  696018 system_pods.go:61] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.410346  696018 system_pods.go:61] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.410352  696018 system_pods.go:61] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.410360  696018 system_pods.go:61] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.410365  696018 system_pods.go:61] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.410369  696018 system_pods.go:61] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.410382  696018 system_pods.go:61] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.410391  696018 system_pods.go:74] duration metric: took 3.156993ms to wait for pod list to return data ...
	I1124 09:05:33.410403  696018 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:05:33.413158  696018 default_sa.go:45] found service account: "default"
	I1124 09:05:33.413182  696018 default_sa.go:55] duration metric: took 2.772178ms for default service account to be created ...
	I1124 09:05:33.413192  696018 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:05:33.416818  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:33.416849  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.416856  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.416863  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.416868  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.416874  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.416879  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.416884  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.416891  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.416935  696018 retry.go:31] will retry after 275.944352ms: missing components: kube-dns
	I1124 09:05:33.697203  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:33.697247  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.697259  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.697269  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.697274  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.697285  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.697290  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.697297  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.697304  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.697327  696018 retry.go:31] will retry after 278.68714ms: missing components: kube-dns
	I1124 09:05:33.979933  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:33.979971  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.979977  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.979984  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.979987  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.979991  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.979994  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.979998  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.980003  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.980020  696018 retry.go:31] will retry after 448.083964ms: missing components: kube-dns
	I1124 09:05:34.432301  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:34.432341  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Running
	I1124 09:05:34.432350  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:34.432355  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:34.432362  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:34.432369  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:34.432374  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:34.432379  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:34.432384  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Running
	I1124 09:05:34.432395  696018 system_pods.go:126] duration metric: took 1.019195458s to wait for k8s-apps to be running ...
	I1124 09:05:34.432410  696018 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:05:34.432534  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:05:34.451401  696018 system_svc.go:56] duration metric: took 18.978773ms WaitForService to wait for kubelet
	I1124 09:05:34.451444  696018 kubeadm.go:587] duration metric: took 15.860405681s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:05:34.451483  696018 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:05:34.454386  696018 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:05:34.454410  696018 node_conditions.go:123] node cpu capacity is 8
	I1124 09:05:34.454427  696018 node_conditions.go:105] duration metric: took 2.938205ms to run NodePressure ...
	I1124 09:05:34.454440  696018 start.go:242] waiting for startup goroutines ...
	I1124 09:05:34.454450  696018 start.go:247] waiting for cluster config update ...
	I1124 09:05:34.454478  696018 start.go:256] writing updated cluster config ...
	I1124 09:05:34.454771  696018 ssh_runner.go:195] Run: rm -f paused
	I1124 09:05:34.459160  696018 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:34.462567  696018 pod_ready.go:83] waiting for pod "coredns-7d764666f9-b6dpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.466303  696018 pod_ready.go:94] pod "coredns-7d764666f9-b6dpn" is "Ready"
	I1124 09:05:34.466324  696018 pod_ready.go:86] duration metric: took 3.738029ms for pod "coredns-7d764666f9-b6dpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.468156  696018 pod_ready.go:83] waiting for pod "etcd-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.471750  696018 pod_ready.go:94] pod "etcd-no-preload-820576" is "Ready"
	I1124 09:05:34.471775  696018 pod_ready.go:86] duration metric: took 3.597676ms for pod "etcd-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.473507  696018 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.477092  696018 pod_ready.go:94] pod "kube-apiserver-no-preload-820576" is "Ready"
	I1124 09:05:34.477115  696018 pod_ready.go:86] duration metric: took 3.588223ms for pod "kube-apiserver-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.478724  696018 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.862953  696018 pod_ready.go:94] pod "kube-controller-manager-no-preload-820576" is "Ready"
	I1124 09:05:34.862977  696018 pod_ready.go:86] duration metric: took 384.235741ms for pod "kube-controller-manager-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:35.063039  696018 pod_ready.go:83] waiting for pod "kube-proxy-vz24l" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:35.463183  696018 pod_ready.go:94] pod "kube-proxy-vz24l" is "Ready"
	I1124 09:05:35.463217  696018 pod_ready.go:86] duration metric: took 400.149042ms for pod "kube-proxy-vz24l" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:35.664151  696018 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:36.063590  696018 pod_ready.go:94] pod "kube-scheduler-no-preload-820576" is "Ready"
	I1124 09:05:36.063619  696018 pod_ready.go:86] duration metric: took 399.441074ms for pod "kube-scheduler-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:36.063632  696018 pod_ready.go:40] duration metric: took 1.604443296s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:36.110852  696018 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1124 09:05:36.112796  696018 out.go:179] * Done! kubectl is now configured to use "no-preload-820576" cluster and "default" namespace by default
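The api_server.go lines above poll https://192.168.76.2:8443/healthz roughly every 500ms, treating "connection refused" as "apiserver not up yet" and only proceeding once the endpoint answers 200 with "ok". Below is a minimal Go sketch of that polling pattern, for illustration only: the URL, interval and 200/ok condition come from the log, but the helper name and structure are made up and are not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz is a made-up helper name; minikube's real logic lives in
// api_server.go and does more (process checks, version probe, etc.).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serving cert is not trusted from the host in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Covers the "dial tcp ... connect: connection refused" lines: keep retrying.
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil // e.g. "https://192.168.103.2:8443/healthz returned 200: ok"
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s did not report healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}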
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	92908e44718b7       56cc512116c8f       7 seconds ago       Running             busybox                   0                   1ee15af433557       busybox                                          default
	a7a841ea7303a       ead0a4a53df89       12 seconds ago      Running             coredns                   0                   5cd1e9dd6b4b4       coredns-5dd5756b68-vxxnm                         kube-system
	a9a5857553e67       6e38f40d628db       12 seconds ago      Running             storage-provisioner       0                   6128b1854bc49       storage-provisioner                              kube-system
	818537e08c060       409467f978b4a       23 seconds ago      Running             kindnet-cni               0                   cd819a24f784f       kindnet-gbp66                                    kube-system
	370631aaaf577       ea1030da44aa1       26 seconds ago      Running             kube-proxy                0                   17a629fbc9de7       kube-proxy-fpbs2                                 kube-system
	f5eddecfb179f       f6f496300a2ae       44 seconds ago      Running             kube-scheduler            0                   d4658a7b318ec       kube-scheduler-old-k8s-version-128377            kube-system
	5d9ec22e03b8b       4be79c38a4bab       44 seconds ago      Running             kube-controller-manager   0                   f3a2eced02a3b       kube-controller-manager-old-k8s-version-128377   kube-system
	842bd9db2d84b       bb5e0dde9054c       44 seconds ago      Running             kube-apiserver            0                   879c975eb1a53       kube-apiserver-old-k8s-version-128377            kube-system
	8df3112d99751       73deb9a3f7025       44 seconds ago      Running             etcd                      0                   78f7483f85b14       etcd-old-k8s-version-128377                      kube-system
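The table above is the kind of per-component listing the earlier cri.go/logs.go steps build with "sudo crictl ps -a --quiet --name=<component>" before gathering logs. A small Go sketch of that lookup follows; it is an assumed helper, not minikube's code, and simply shells out to the same crictl invocation shown in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists CRI container IDs for one component, mirroring the
// "sudo crictl ps -a --quiet --name=..." runs in the cri.go lines above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-apiserver container(s): %v\n", len(ids), ids)
}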
	
	
	==> containerd <==
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.013913791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vxxnm,Uid:b84bae0f-9f75-4d1c-b2ed-da0c10a141cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cd1e9dd6b4b4d2ac225fd496f6fac6cfc490bdb385b217119ffd695f763abf3\""
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.016899714Z" level=info msg="CreateContainer within sandbox \"5cd1e9dd6b4b4d2ac225fd496f6fac6cfc490bdb385b217119ffd695f763abf3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.024116931Z" level=info msg="Container a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.030290587Z" level=info msg="CreateContainer within sandbox \"5cd1e9dd6b4b4d2ac225fd496f6fac6cfc490bdb385b217119ffd695f763abf3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5\""
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.030773995Z" level=info msg="StartContainer for \"a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5\""
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.031567693Z" level=info msg="connecting to shim a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5" address="unix:///run/containerd/s/7e80e31b141e93e01901781df29b4edcac7d62ec3fd02a2cc1cde1ffde438980" protocol=ttrpc version=3
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.070950416Z" level=info msg="StartContainer for \"a9a5857553e67019e47641c1970bb0d5555afd6b608c94a94501dd485efac0c4\" returns successfully"
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.075707267Z" level=info msg="StartContainer for \"a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5\" returns successfully"
	Nov 24 09:05:34 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:34.747845169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bfaec734-d874-4dcb-b31f-feb87adccfca,Namespace:default,Attempt:0,}"
	Nov 24 09:05:34 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:34.786693345Z" level=info msg="connecting to shim 1ee15af4335571d5c2c1f8cf460b21232bfc82973349a4c00a86f5a2545492a2" address="unix:///run/containerd/s/b51cd8663d01a7c675d7f65aecc44f4b6281e3382088734fe56170e879775890" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 09:05:34 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:34.851781414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bfaec734-d874-4dcb-b31f-feb87adccfca,Namespace:default,Attempt:0,} returns sandbox id \"1ee15af4335571d5c2c1f8cf460b21232bfc82973349a4c00a86f5a2545492a2\""
	Nov 24 09:05:34 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:34.853515051Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.357982384Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.358604580Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.359790616Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.361443799Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.361898949Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.508337162s"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.361934177Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.363533599Z" level=info msg="CreateContainer within sandbox \"1ee15af4335571d5c2c1f8cf460b21232bfc82973349a4c00a86f5a2545492a2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.369396201Z" level=info msg="Container 92908e44718b76213a4fd87e310efd757d73940a581879283782328fd7a0dfa9: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.374660363Z" level=info msg="CreateContainer within sandbox \"1ee15af4335571d5c2c1f8cf460b21232bfc82973349a4c00a86f5a2545492a2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"92908e44718b76213a4fd87e310efd757d73940a581879283782328fd7a0dfa9\""
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.375182989Z" level=info msg="StartContainer for \"92908e44718b76213a4fd87e310efd757d73940a581879283782328fd7a0dfa9\""
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.376051696Z" level=info msg="connecting to shim 92908e44718b76213a4fd87e310efd757d73940a581879283782328fd7a0dfa9" address="unix:///run/containerd/s/b51cd8663d01a7c675d7f65aecc44f4b6281e3382088734fe56170e879775890" protocol=ttrpc version=3
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.425776823Z" level=info msg="StartContainer for \"92908e44718b76213a4fd87e310efd757d73940a581879283782328fd7a0dfa9\" returns successfully"
	Nov 24 09:05:43 old-k8s-version-128377 containerd[661]: E1124 09:05:43.526421     661 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54326 - 65005 "HINFO IN 6565264189616162908.3935264129304859187. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029224592s
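Before this CoreDNS instance loaded its configuration, the start logs injected a hosts record for host.minikube.internal (192.168.103.1 for old-k8s-version-128377, 192.168.85.1 for no-preload-820576) into the coredns ConfigMap. The following Go sketch shows one way to check that record from inside the cluster network; it is illustrative only, and the 10.96.0.10 kube-dns ClusterIP is an assumption that does not appear anywhere in this report.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// 10.96.0.10 is the conventional kube-dns ClusterIP; adjust for the cluster.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "host.minikube.internal")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// The start logs suggest 192.168.103.1 for the old-k8s-version cluster.
	fmt.Println("host.minikube.internal ->", addrs)
}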
	
	
	==> describe nodes <==
	Name:               old-k8s-version-128377
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-128377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=old-k8s-version-128377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_05_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:05:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-128377
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:05:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:05:35 +0000   Mon, 24 Nov 2025 09:05:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:05:35 +0000   Mon, 24 Nov 2025 09:05:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:05:35 +0000   Mon, 24 Nov 2025 09:05:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:05:35 +0000   Mon, 24 Nov 2025 09:05:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-128377
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                220a6d4b-4a36-435b-ad8f-2d418f4618a1
	  Boot ID:                    f052cd47-57de-4521-b5fb-139979fdced9
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-vxxnm                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-128377                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-gbp66                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-128377             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-128377    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-fpbs2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-128377             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node old-k8s-version-128377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node old-k8s-version-128377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node old-k8s-version-128377 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  40s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-128377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-128377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-128377 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-128377 event: Registered Node old-k8s-version-128377 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-128377 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [8df3112d99751cf0ed66add055e0df50e3c944dbb66b787e2e3ae37efbec7d4e] <==
	{"level":"info","ts":"2025-11-24T09:05:00.107581Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T09:05:00.107626Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:05:00.107753Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:05:00.10778Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:05:00.10887Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T09:05:00.108869Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-24T09:05:01.710895Z","caller":"traceutil/trace.go:171","msg":"trace[1442253581] transaction","detail":"{read_only:false; response_revision:20; number_of_response:1; }","duration":"170.61339ms","start":"2025-11-24T09:05:01.540258Z","end":"2025-11-24T09:05:01.710871Z","steps":["trace[1442253581] 'process raft request'  (duration: 170.544438ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:01.711011Z","caller":"traceutil/trace.go:171","msg":"trace[699662152] transaction","detail":"{read_only:false; response_revision:19; number_of_response:1; }","duration":"172.264745ms","start":"2025-11-24T09:05:01.538726Z","end":"2025-11-24T09:05:01.710991Z","steps":["trace[699662152] 'process raft request'  (duration: 172.04013ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:05:01.711031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.576061ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/csr-9x9d8\" ","response":"range_response_count:1 size:895"}
	{"level":"info","ts":"2025-11-24T09:05:01.710896Z","caller":"traceutil/trace.go:171","msg":"trace[1006472868] transaction","detail":"{read_only:false; response_revision:18; number_of_response:1; }","duration":"172.691781ms","start":"2025-11-24T09:05:01.538162Z","end":"2025-11-24T09:05:01.710854Z","steps":["trace[1006472868] 'process raft request'  (duration: 109.125575ms)","trace[1006472868] 'compare'  (duration: 63.355357ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:05:01.710915Z","caller":"traceutil/trace.go:171","msg":"trace[981263403] transaction","detail":"{read_only:false; response_revision:21; number_of_response:1; }","duration":"170.391166ms","start":"2025-11-24T09:05:01.540518Z","end":"2025-11-24T09:05:01.710909Z","steps":["trace[981263403] 'process raft request'  (duration: 170.307811ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:01.711086Z","caller":"traceutil/trace.go:171","msg":"trace[1918024405] range","detail":"{range_begin:/registry/certificatesigningrequests/csr-9x9d8; range_end:; response_count:1; response_revision:22; }","duration":"172.654948ms","start":"2025-11-24T09:05:01.538422Z","end":"2025-11-24T09:05:01.711077Z","steps":["trace[1918024405] 'agreement among raft nodes before linearized reading'  (duration: 172.512588ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:01.710914Z","caller":"traceutil/trace.go:171","msg":"trace[1488131719] linearizableReadLoop","detail":"{readStateIndex:22; appliedIndex:18; }","duration":"172.460174ms","start":"2025-11-24T09:05:01.53844Z","end":"2025-11-24T09:05:01.7109Z","steps":["trace[1488131719] 'read index received'  (duration: 25.895675ms)","trace[1488131719] 'applied index is now lower than readState.Index'  (duration: 146.559971ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:05:01.711054Z","caller":"traceutil/trace.go:171","msg":"trace[1678514513] transaction","detail":"{read_only:false; response_revision:22; number_of_response:1; }","duration":"149.8797ms","start":"2025-11-24T09:05:01.561163Z","end":"2025-11-24T09:05:01.711042Z","steps":["trace[1678514513] 'process raft request'  (duration: 149.700045ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:01.711435Z","caller":"traceutil/trace.go:171","msg":"trace[2085549652] transaction","detail":"{read_only:false; response_revision:23; number_of_response:1; }","duration":"144.831606ms","start":"2025-11-24T09:05:01.566593Z","end":"2025-11-24T09:05:01.711425Z","steps":["trace[2085549652] 'process raft request'  (duration: 144.652194ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:01.711454Z","caller":"traceutil/trace.go:171","msg":"trace[1776690454] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"143.564662ms","start":"2025-11-24T09:05:01.567876Z","end":"2025-11-24T09:05:01.71144Z","steps":["trace[1776690454] 'process raft request'  (duration: 143.429904ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:05:01.711724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.213558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-11-24T09:05:01.711757Z","caller":"traceutil/trace.go:171","msg":"trace[366826393] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:25; }","duration":"146.253881ms","start":"2025-11-24T09:05:01.565494Z","end":"2025-11-24T09:05:01.711748Z","steps":["trace[366826393] 'agreement among raft nodes before linearized reading'  (duration: 146.18478ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:01.711931Z","caller":"traceutil/trace.go:171","msg":"trace[1923893862] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"137.068438ms","start":"2025-11-24T09:05:01.574851Z","end":"2025-11-24T09:05:01.711919Z","steps":["trace[1923893862] 'process raft request'  (duration: 136.481982ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:05:01.712125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.955875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T09:05:01.712163Z","caller":"traceutil/trace.go:171","msg":"trace[90940555] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:0; response_revision:25; }","duration":"172.012061ms","start":"2025-11-24T09:05:01.54014Z","end":"2025-11-24T09:05:01.712153Z","steps":["trace[90940555] 'agreement among raft nodes before linearized reading'  (duration: 171.930715ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:05:01.714609Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.250502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-128377\" ","response":"range_response_count:1 size:3558"}
	{"level":"info","ts":"2025-11-24T09:05:01.714708Z","caller":"traceutil/trace.go:171","msg":"trace[322045522] range","detail":"{range_begin:/registry/minions/old-k8s-version-128377; range_end:; response_count:1; response_revision:25; }","duration":"175.353553ms","start":"2025-11-24T09:05:01.539338Z","end":"2025-11-24T09:05:01.714691Z","steps":["trace[322045522] 'agreement among raft nodes before linearized reading'  (duration: 172.031487ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:03.559324Z","caller":"traceutil/trace.go:171","msg":"trace[627044044] transaction","detail":"{read_only:false; response_revision:204; number_of_response:1; }","duration":"100.594994ms","start":"2025-11-24T09:05:03.458371Z","end":"2025-11-24T09:05:03.558966Z","steps":["trace[627044044] 'process raft request'  (duration: 98.72439ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:11.43815Z","caller":"traceutil/trace.go:171","msg":"trace[324713988] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"136.243687ms","start":"2025-11-24T09:05:11.301878Z","end":"2025-11-24T09:05:11.438122Z","steps":["trace[324713988] 'process raft request'  (duration: 135.577137ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:05:44 up  3:48,  0 user,  load average: 4.43, 3.43, 10.79
	Linux old-k8s-version-128377 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [818537e08c0605796949e72c73a034b7d5f104ce598d4a12f0ed8bf30de9c646] <==
	I1124 09:05:21.342277       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:05:21.342547       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 09:05:21.342705       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:05:21.342728       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:05:21.342756       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:05:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:05:21.545109       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:05:21.545137       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:05:21.545150       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:05:21.545827       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:05:22.046295       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:05:22.046329       1 metrics.go:72] Registering metrics
	I1124 09:05:22.046391       1 controller.go:711] "Syncing nftables rules"
	I1124 09:05:31.547663       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 09:05:31.547728       1 main.go:301] handling current node
	I1124 09:05:41.547315       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 09:05:41.547363       1 main.go:301] handling current node
	
	
	==> kube-apiserver [842bd9db2d84b65b054e4b006bfb9c11b98ac3cdcbe13cd821183480cd046d8a] <==
	I1124 09:05:01.506809       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 09:05:01.506838       1 aggregator.go:166] initial CRD sync complete...
	I1124 09:05:01.506846       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 09:05:01.506863       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:05:01.506869       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:05:01.508109       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 09:05:01.508757       1 shared_informer.go:318] Caches are synced for configmaps
	E1124 09:05:01.537227       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1124 09:05:01.741694       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:05:02.411561       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:05:02.415133       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:05:02.415155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:05:02.826831       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:05:02.865354       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:05:02.945781       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:05:02.951178       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1124 09:05:02.952085       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 09:05:02.955858       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:05:03.457945       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 09:05:04.197911       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 09:05:04.208245       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:05:04.218442       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 09:05:17.015236       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 09:05:17.165046       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1124 09:05:17.165047       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5d9ec22e03b8b0446d34a5b300037519eb0aa0be6b1e6c451907abb271f71839] <==
	I1124 09:05:16.510194       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-128377"
	I1124 09:05:16.510252       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1124 09:05:16.516579       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 09:05:16.831807       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 09:05:16.890844       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 09:05:16.890883       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 09:05:17.019027       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1124 09:05:17.175390       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gbp66"
	I1124 09:05:17.176958       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fpbs2"
	I1124 09:05:17.325895       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vxxnm"
	I1124 09:05:17.332721       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-x5sl2"
	I1124 09:05:17.343264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="324.364712ms"
	I1124 09:05:17.351654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.320995ms"
	I1124 09:05:17.351793       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.483µs"
	I1124 09:05:17.672071       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 09:05:17.682409       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-x5sl2"
	I1124 09:05:17.690482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.456609ms"
	I1124 09:05:17.698725       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.176655ms"
	I1124 09:05:17.698851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.584µs"
	I1124 09:05:31.598337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.212µs"
	I1124 09:05:31.631586       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.266µs"
	I1124 09:05:32.360508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="141.431µs"
	I1124 09:05:32.386954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.987919ms"
	I1124 09:05:32.387048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.305µs"
	I1124 09:05:36.514110       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [370631aaaf577fb6a343282108f71bb03e72ef6024de9d9f8e2a2eeb7e16e746] <==
	I1124 09:05:17.831726       1 server_others.go:69] "Using iptables proxy"
	I1124 09:05:17.841216       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1124 09:05:17.866087       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:05:17.868989       1 server_others.go:152] "Using iptables Proxier"
	I1124 09:05:17.869038       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 09:05:17.869048       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 09:05:17.869091       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 09:05:17.869396       1 server.go:846] "Version info" version="v1.28.0"
	I1124 09:05:17.869419       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:05:17.870089       1 config.go:188] "Starting service config controller"
	I1124 09:05:17.870115       1 config.go:315] "Starting node config controller"
	I1124 09:05:17.870130       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 09:05:17.870125       1 config.go:97] "Starting endpoint slice config controller"
	I1124 09:05:17.870157       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 09:05:17.870135       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 09:05:17.970983       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1124 09:05:17.970991       1 shared_informer.go:318] Caches are synced for service config
	I1124 09:05:17.970967       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f5eddecfb179fe94de6b3892600fc1870efa5679c82874d72a3b301753e6f7d4] <==
	E1124 09:05:01.478877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 09:05:01.478878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1124 09:05:01.478887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 09:05:01.478907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 09:05:01.478997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 09:05:01.479055       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1124 09:05:01.479077       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 09:05:01.479125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1124 09:05:02.313819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 09:05:02.313863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1124 09:05:02.319417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 09:05:02.319451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1124 09:05:02.429310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1124 09:05:02.429356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1124 09:05:02.538603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 09:05:02.538660       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1124 09:05:02.549098       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 09:05:02.549140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 09:05:02.661900       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1124 09:05:02.661937       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1124 09:05:02.666268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 09:05:02.666312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 09:05:02.688142       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 09:05:02.688189       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1124 09:05:03.073951       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 09:05:16 old-k8s-version-128377 kubelet[1521]: I1124 09:05:16.342896    1521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.183175    1521 topology_manager.go:215] "Topology Admit Handler" podUID="52128126-550d-4795-9fa1-e1d3d9510dd3" podNamespace="kube-system" podName="kube-proxy-fpbs2"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.188113    1521 topology_manager.go:215] "Topology Admit Handler" podUID="49954742-ea7f-466f-80d8-7d6ac88ce36c" podNamespace="kube-system" podName="kindnet-gbp66"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338200    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzbjt\" (UniqueName: \"kubernetes.io/projected/52128126-550d-4795-9fa1-e1d3d9510dd3-kube-api-access-vzbjt\") pod \"kube-proxy-fpbs2\" (UID: \"52128126-550d-4795-9fa1-e1d3d9510dd3\") " pod="kube-system/kube-proxy-fpbs2"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338280    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/49954742-ea7f-466f-80d8-7d6ac88ce36c-cni-cfg\") pod \"kindnet-gbp66\" (UID: \"49954742-ea7f-466f-80d8-7d6ac88ce36c\") " pod="kube-system/kindnet-gbp66"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338319    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52128126-550d-4795-9fa1-e1d3d9510dd3-lib-modules\") pod \"kube-proxy-fpbs2\" (UID: \"52128126-550d-4795-9fa1-e1d3d9510dd3\") " pod="kube-system/kube-proxy-fpbs2"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338351    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49954742-ea7f-466f-80d8-7d6ac88ce36c-lib-modules\") pod \"kindnet-gbp66\" (UID: \"49954742-ea7f-466f-80d8-7d6ac88ce36c\") " pod="kube-system/kindnet-gbp66"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338392    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/52128126-550d-4795-9fa1-e1d3d9510dd3-kube-proxy\") pod \"kube-proxy-fpbs2\" (UID: \"52128126-550d-4795-9fa1-e1d3d9510dd3\") " pod="kube-system/kube-proxy-fpbs2"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338424    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49954742-ea7f-466f-80d8-7d6ac88ce36c-xtables-lock\") pod \"kindnet-gbp66\" (UID: \"49954742-ea7f-466f-80d8-7d6ac88ce36c\") " pod="kube-system/kindnet-gbp66"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338473    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5l7\" (UniqueName: \"kubernetes.io/projected/49954742-ea7f-466f-80d8-7d6ac88ce36c-kube-api-access-cd5l7\") pod \"kindnet-gbp66\" (UID: \"49954742-ea7f-466f-80d8-7d6ac88ce36c\") " pod="kube-system/kindnet-gbp66"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338537    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52128126-550d-4795-9fa1-e1d3d9510dd3-xtables-lock\") pod \"kube-proxy-fpbs2\" (UID: \"52128126-550d-4795-9fa1-e1d3d9510dd3\") " pod="kube-system/kube-proxy-fpbs2"
	Nov 24 09:05:18 old-k8s-version-128377 kubelet[1521]: I1124 09:05:18.914069    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fpbs2" podStartSLOduration=1.913988204 podCreationTimestamp="2025-11-24 09:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:18.331224336 +0000 UTC m=+14.156867889" watchObservedRunningTime="2025-11-24 09:05:18.913988204 +0000 UTC m=+14.739631764"
	Nov 24 09:05:21 old-k8s-version-128377 kubelet[1521]: I1124 09:05:21.337175    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-gbp66" podStartSLOduration=1.258069975 podCreationTimestamp="2025-11-24 09:05:17 +0000 UTC" firstStartedPulling="2025-11-24 09:05:17.956037798 +0000 UTC m=+13.781681343" lastFinishedPulling="2025-11-24 09:05:21.035088666 +0000 UTC m=+16.860732211" observedRunningTime="2025-11-24 09:05:21.33698865 +0000 UTC m=+17.162632223" watchObservedRunningTime="2025-11-24 09:05:21.337120843 +0000 UTC m=+17.162764404"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.576686    1521 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.597206    1521 topology_manager.go:215] "Topology Admit Handler" podUID="7e4f56c0-0b49-47cd-9278-129ad898b781" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.598949    1521 topology_manager.go:215] "Topology Admit Handler" podUID="b84bae0f-9f75-4d1c-b2ed-da0c10a141cf" podNamespace="kube-system" podName="coredns-5dd5756b68-vxxnm"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.745876    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7e4f56c0-0b49-47cd-9278-129ad898b781-tmp\") pod \"storage-provisioner\" (UID: \"7e4f56c0-0b49-47cd-9278-129ad898b781\") " pod="kube-system/storage-provisioner"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.746005    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b84bae0f-9f75-4d1c-b2ed-da0c10a141cf-config-volume\") pod \"coredns-5dd5756b68-vxxnm\" (UID: \"b84bae0f-9f75-4d1c-b2ed-da0c10a141cf\") " pod="kube-system/coredns-5dd5756b68-vxxnm"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.746049    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s87ck\" (UniqueName: \"kubernetes.io/projected/b84bae0f-9f75-4d1c-b2ed-da0c10a141cf-kube-api-access-s87ck\") pod \"coredns-5dd5756b68-vxxnm\" (UID: \"b84bae0f-9f75-4d1c-b2ed-da0c10a141cf\") " pod="kube-system/coredns-5dd5756b68-vxxnm"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.746075    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp79g\" (UniqueName: \"kubernetes.io/projected/7e4f56c0-0b49-47cd-9278-129ad898b781-kube-api-access-mp79g\") pod \"storage-provisioner\" (UID: \"7e4f56c0-0b49-47cd-9278-129ad898b781\") " pod="kube-system/storage-provisioner"
	Nov 24 09:05:32 old-k8s-version-128377 kubelet[1521]: I1124 09:05:32.360059    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vxxnm" podStartSLOduration=15.360007602 podCreationTimestamp="2025-11-24 09:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:32.35995945 +0000 UTC m=+28.185603012" watchObservedRunningTime="2025-11-24 09:05:32.360007602 +0000 UTC m=+28.185651165"
	Nov 24 09:05:32 old-k8s-version-128377 kubelet[1521]: I1124 09:05:32.379733    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.379681272 podCreationTimestamp="2025-11-24 09:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:32.370112867 +0000 UTC m=+28.195756426" watchObservedRunningTime="2025-11-24 09:05:32.379681272 +0000 UTC m=+28.205324835"
	Nov 24 09:05:34 old-k8s-version-128377 kubelet[1521]: I1124 09:05:34.439352    1521 topology_manager.go:215] "Topology Admit Handler" podUID="bfaec734-d874-4dcb-b31f-feb87adccfca" podNamespace="default" podName="busybox"
	Nov 24 09:05:34 old-k8s-version-128377 kubelet[1521]: I1124 09:05:34.561236    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwqg6\" (UniqueName: \"kubernetes.io/projected/bfaec734-d874-4dcb-b31f-feb87adccfca-kube-api-access-qwqg6\") pod \"busybox\" (UID: \"bfaec734-d874-4dcb-b31f-feb87adccfca\") " pod="default/busybox"
	Nov 24 09:05:38 old-k8s-version-128377 kubelet[1521]: I1124 09:05:38.375611    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.866491732 podCreationTimestamp="2025-11-24 09:05:34 +0000 UTC" firstStartedPulling="2025-11-24 09:05:34.853152472 +0000 UTC m=+30.678796027" lastFinishedPulling="2025-11-24 09:05:37.362217947 +0000 UTC m=+33.187861503" observedRunningTime="2025-11-24 09:05:38.375372923 +0000 UTC m=+34.201016485" watchObservedRunningTime="2025-11-24 09:05:38.375557208 +0000 UTC m=+34.201200770"
	
	
	==> storage-provisioner [a9a5857553e67019e47641c1970bb0d5555afd6b608c94a94501dd485efac0c4] <==
	I1124 09:05:32.081185       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:05:32.090604       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:05:32.090641       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 09:05:32.097885       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:05:32.097963       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"742d8911-ea16-4251-8cf0-6f909959732d", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-128377_807761f2-87be-4f83-a3e6-a9218ea13b30 became leader
	I1124 09:05:32.098144       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-128377_807761f2-87be-4f83-a3e6-a9218ea13b30!
	I1124 09:05:32.198942       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-128377_807761f2-87be-4f83-a3e6-a9218ea13b30!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-128377 -n old-k8s-version-128377
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-128377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-128377
helpers_test.go:243: (dbg) docker inspect old-k8s-version-128377:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7",
	        "Created": "2025-11-24T09:04:51.081869704Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 696955,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:04:51.124349133Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7/hosts",
	        "LogPath": "/var/lib/docker/containers/2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7/2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7-json.log",
	        "Name": "/old-k8s-version-128377",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-128377:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-128377",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2f10becef58704f5e7bd5cb0836d9f1660358d1387d26e05576d2fc9439102c7",
	                "LowerDir": "/var/lib/docker/overlay2/1b1691990697dca2c1039c44453446d25814644b5c2e14c7ed7f94a719a51d83-init/diff:/var/lib/docker/overlay2/a062700147ad5d1f8f2a68f70ba6ad34ea6495dd365bcb265ab17ea27961837b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1b1691990697dca2c1039c44453446d25814644b5c2e14c7ed7f94a719a51d83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1b1691990697dca2c1039c44453446d25814644b5c2e14c7ed7f94a719a51d83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1b1691990697dca2c1039c44453446d25814644b5c2e14c7ed7f94a719a51d83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-128377",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-128377/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-128377",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-128377",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-128377",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1b825735b854737d663311b12a71789ec27a2117f701b1d752b938a4e9f325be",
	            "SandboxKey": "/var/run/docker/netns/1b825735b854",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-128377": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5e2ac3220d9f4f0222496592b8e5141116283ec11109477dec7a51401ec91c02",
	                    "EndpointID": "4ad14cff7e04c8fe264f407478b59f88dc3ab8d1c7ab17924a24adb832eca462",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "be:3f:51:5a:9c:89",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-128377",
	                        "2f10becef587"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-128377 -n old-k8s-version-128377
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-128377 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-128377 logs -n 25: (1.19868846s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-203355 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                              │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                              │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ delete  │ -p missing-upgrade-058813                                                                                                                                                                                                                           │ missing-upgrade-058813 │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:04 UTC │
	│ ssh     │ -p cilium-203355 sudo systemctl cat docker --no-pager                                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo docker system info                                                                                                                                                                                                            │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo containerd config dump                                                                                                                                                                                                        │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo crio config                                                                                                                                                                                                                   │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ delete  │ -p cilium-203355                                                                                                                                                                                                                                    │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:04 UTC │
	│ start   │ -p old-k8s-version-128377 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:05 UTC │
	│ start   │ -p no-preload-820576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:05 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:04:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:04:47.686335  696018 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:04:47.686445  696018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:04:47.686456  696018 out.go:374] Setting ErrFile to fd 2...
	I1124 09:04:47.686474  696018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:04:47.686683  696018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 09:04:47.687133  696018 out.go:368] Setting JSON to false
	I1124 09:04:47.688408  696018 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13624,"bootTime":1763961464,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:04:47.688532  696018 start.go:143] virtualization: kvm guest
	I1124 09:04:47.690354  696018 out.go:179] * [no-preload-820576] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:04:47.691472  696018 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:04:47.691501  696018 notify.go:221] Checking for updates...
	I1124 09:04:47.693590  696018 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:04:47.694681  696018 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:04:47.695683  696018 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 09:04:47.697109  696018 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:04:47.698248  696018 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:04:47.699807  696018 config.go:182] Loaded profile config "cert-expiration-869306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 09:04:47.699947  696018 config.go:182] Loaded profile config "kubernetes-upgrade-521313": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:04:47.700091  696018 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:04:47.700236  696018 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:04:47.724639  696018 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:04:47.724770  696018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:04:47.791833  696018 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-24 09:04:47.780432821 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:04:47.791998  696018 docker.go:319] overlay module found
	I1124 09:04:47.794089  696018 out.go:179] * Using the docker driver based on user configuration
	I1124 09:04:47.795621  696018 start.go:309] selected driver: docker
	I1124 09:04:47.795639  696018 start.go:927] validating driver "docker" against <nil>
	I1124 09:04:47.795651  696018 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:04:47.796325  696018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:04:47.859511  696018 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 09:04:47.848833175 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:04:47.859748  696018 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:04:47.859957  696018 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:04:47.861778  696018 out.go:179] * Using Docker driver with root privileges
	I1124 09:04:47.862632  696018 cni.go:84] Creating CNI manager for ""
	I1124 09:04:47.862696  696018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:04:47.862708  696018 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:04:47.862775  696018 start.go:353] cluster config:
	{Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:04:47.863875  696018 out.go:179] * Starting "no-preload-820576" primary control-plane node in "no-preload-820576" cluster
	I1124 09:04:47.864812  696018 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 09:04:47.865865  696018 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:04:47.866835  696018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:04:47.866921  696018 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:04:47.866958  696018 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json ...
	I1124 09:04:47.867001  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json: {Name:mk04f43d651118a00ac1be32029cffb149669d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:47.867208  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:47.890231  696018 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:04:47.890260  696018 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:04:47.890281  696018 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:04:47.890321  696018 start.go:360] acquireMachinesLock for no-preload-820576: {Name:mk6b6fb581999217c645edacaa9c18971e97964f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:47.890432  696018 start.go:364] duration metric: took 88.402µs to acquireMachinesLock for "no-preload-820576"
	I1124 09:04:47.890474  696018 start.go:93] Provisioning new machine with config: &{Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNS
Log:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:04:47.890567  696018 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:04:48.739369  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:40906->192.168.76.2:8443: read: connection reset by peer
	I1124 09:04:48.739430  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:48.740184  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:48.920539  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:48.921019  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:49.420530  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:49.420996  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:46.813535  695520 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:04:46.813778  695520 start.go:159] libmachine.API.Create for "old-k8s-version-128377" (driver="docker")
	I1124 09:04:46.813816  695520 client.go:173] LocalClient.Create starting
	I1124 09:04:46.813892  695520 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem
	I1124 09:04:46.813936  695520 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:46.813967  695520 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:46.814043  695520 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem
	I1124 09:04:46.814076  695520 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:46.814095  695520 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:46.814441  695520 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:04:46.831913  695520 cli_runner.go:211] docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:04:46.831996  695520 network_create.go:284] running [docker network inspect old-k8s-version-128377] to gather additional debugging logs...
	I1124 09:04:46.832018  695520 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377
	W1124 09:04:46.848875  695520 cli_runner.go:211] docker network inspect old-k8s-version-128377 returned with exit code 1
	I1124 09:04:46.848912  695520 network_create.go:287] error running [docker network inspect old-k8s-version-128377]: docker network inspect old-k8s-version-128377: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-128377 not found
	I1124 09:04:46.848928  695520 network_create.go:289] output of [docker network inspect old-k8s-version-128377]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-128377 not found
	
	** /stderr **
	I1124 09:04:46.849044  695520 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:46.866840  695520 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c654f70fdf0e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:f7:ca:91:9d:ad} reservation:<nil>}
	I1124 09:04:46.867443  695520 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1081c4000c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:b1:6d:32:2c:78} reservation:<nil>}
	I1124 09:04:46.868124  695520 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30fdd1988974 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:59:2f:0a:61:81} reservation:<nil>}
	I1124 09:04:46.868877  695520 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cd297979890 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:91:f3:e4:95:17} reservation:<nil>}
	I1124 09:04:46.869272  695520 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9bf62793deff IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0a:d1:a9:3b:89:29} reservation:<nil>}
	I1124 09:04:46.869983  695520 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5fa0f78c53ad IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:9e:96:d6:0a:fe:a6} reservation:<nil>}
	I1124 09:04:46.870809  695520 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e158e0}
	I1124 09:04:46.870832  695520 network_create.go:124] attempt to create docker network old-k8s-version-128377 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 09:04:46.870880  695520 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-128377 old-k8s-version-128377
	I1124 09:04:46.993201  695520 network_create.go:108] docker network old-k8s-version-128377 192.168.103.0/24 created
	I1124 09:04:46.993243  695520 kic.go:121] calculated static IP "192.168.103.2" for the "old-k8s-version-128377" container
	I1124 09:04:46.993321  695520 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:04:47.015308  695520 cli_runner.go:164] Run: docker volume create old-k8s-version-128377 --label name.minikube.sigs.k8s.io=old-k8s-version-128377 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:04:47.034791  695520 oci.go:103] Successfully created a docker volume old-k8s-version-128377
	I1124 09:04:47.034869  695520 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-128377-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-128377 --entrypoint /usr/bin/test -v old-k8s-version-128377:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:04:47.772927  695520 oci.go:107] Successfully prepared a docker volume old-k8s-version-128377
	I1124 09:04:47.773023  695520 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 09:04:47.773041  695520 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 09:04:47.773133  695520 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-128377:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 09:04:50.987600  695520 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-128377:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.214396647s)
	I1124 09:04:50.987639  695520 kic.go:203] duration metric: took 3.214593361s to extract preloaded images to volume ...
	W1124 09:04:50.987789  695520 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:04:50.987849  695520 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:04:50.987920  695520 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:04:51.061728  695520 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-128377 --name old-k8s-version-128377 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-128377 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-128377 --network old-k8s-version-128377 --ip 192.168.103.2 --volume old-k8s-version-128377:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:04:51.401514  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Running}}
	I1124 09:04:51.426748  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:04:51.456228  695520 cli_runner.go:164] Run: docker exec old-k8s-version-128377 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:04:51.515517  695520 oci.go:144] the created container "old-k8s-version-128377" has a running status.
	I1124 09:04:51.515571  695520 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa...
	I1124 09:04:47.893309  696018 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:04:47.893645  696018 start.go:159] libmachine.API.Create for "no-preload-820576" (driver="docker")
	I1124 09:04:47.893687  696018 client.go:173] LocalClient.Create starting
	I1124 09:04:47.893789  696018 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem
	I1124 09:04:47.893833  696018 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:47.893861  696018 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:47.893953  696018 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem
	I1124 09:04:47.893982  696018 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:47.893999  696018 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:47.894436  696018 cli_runner.go:164] Run: docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:04:47.915789  696018 cli_runner.go:211] docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:04:47.915886  696018 network_create.go:284] running [docker network inspect no-preload-820576] to gather additional debugging logs...
	I1124 09:04:47.915925  696018 cli_runner.go:164] Run: docker network inspect no-preload-820576
	W1124 09:04:47.939725  696018 cli_runner.go:211] docker network inspect no-preload-820576 returned with exit code 1
	I1124 09:04:47.939760  696018 network_create.go:287] error running [docker network inspect no-preload-820576]: docker network inspect no-preload-820576: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-820576 not found
	I1124 09:04:47.939788  696018 network_create.go:289] output of [docker network inspect no-preload-820576]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-820576 not found
	
	** /stderr **
	I1124 09:04:47.939956  696018 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:47.960368  696018 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c654f70fdf0e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:f7:ca:91:9d:ad} reservation:<nil>}
	I1124 09:04:47.961456  696018 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1081c4000c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:b1:6d:32:2c:78} reservation:<nil>}
	I1124 09:04:47.962397  696018 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30fdd1988974 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:59:2f:0a:61:81} reservation:<nil>}
	I1124 09:04:47.963597  696018 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cd297979890 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:91:f3:e4:95:17} reservation:<nil>}
	I1124 09:04:47.964832  696018 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e9cf50}
	I1124 09:04:47.964868  696018 network_create.go:124] attempt to create docker network no-preload-820576 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 09:04:47.964929  696018 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-820576 no-preload-820576
	I1124 09:04:48.017684  696018 network_create.go:108] docker network no-preload-820576 192.168.85.0/24 created
	I1124 09:04:48.017725  696018 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-820576" container
	I1124 09:04:48.017804  696018 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:04:48.037793  696018 cli_runner.go:164] Run: docker volume create no-preload-820576 --label name.minikube.sigs.k8s.io=no-preload-820576 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:04:48.057638  696018 oci.go:103] Successfully created a docker volume no-preload-820576
	I1124 09:04:48.057738  696018 cli_runner.go:164] Run: docker run --rm --name no-preload-820576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820576 --entrypoint /usr/bin/test -v no-preload-820576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:04:48.192090  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:48.509962  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:48.827547  696018 cache.go:107] acquiring lock: {Name:mkbcabeb5a23ff077ffdad64c71e9fe699d94040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827544  696018 cache.go:107] acquiring lock: {Name:mk92c82896924ab47423467b25ccd98ee4128baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827656  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:04:48.827672  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:04:48.827672  696018 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 138.757µs
	I1124 09:04:48.827689  696018 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:04:48.827683  696018 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 176.678µs
	I1124 09:04:48.827708  696018 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:04:48.827708  696018 cache.go:107] acquiring lock: {Name:mkf3a006b133f81ed32779d427a8d0a9b25f9000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827735  696018 cache.go:107] acquiring lock: {Name:mkd74819cb24442927f7fb2cffd47478de40e14c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827766  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:04:48.827773  696018 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 69.196µs
	I1124 09:04:48.827780  696018 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:04:48.827788  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:04:48.827796  696018 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 65.204µs
	I1124 09:04:48.827804  696018 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:04:48.827791  696018 cache.go:107] acquiring lock: {Name:mk6b573bbd33cfc3c3f77668030fb064598572fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827820  696018 cache.go:107] acquiring lock: {Name:mk7f052905284f586f4f1cf24b8c34cc48e0b85b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827866  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:04:48.827873  696018 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 57.027µs
	I1124 09:04:48.827882  696018 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:04:48.827796  696018 cache.go:107] acquiring lock: {Name:mk1d635b72f6d026600360916178f900a450350e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827887  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:04:48.827900  696018 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 115.907µs
	I1124 09:04:48.827910  696018 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:04:48.827914  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:04:48.827921  696018 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 128.45µs
	I1124 09:04:48.827937  696018 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:04:48.827719  696018 cache.go:107] acquiring lock: {Name:mk8023690ce5b18d9a1789b2f878bf92c1381799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.828021  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:04:48.828033  696018 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 327.502µs
	I1124 09:04:48.828051  696018 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:04:48.828067  696018 cache.go:87] Successfully saved all images to host disk.
	I1124 09:04:50.353018  696018 cli_runner.go:217] Completed: docker run --rm --name no-preload-820576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820576 --entrypoint /usr/bin/test -v no-preload-820576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.295229864s)
	I1124 09:04:50.353061  696018 oci.go:107] Successfully prepared a docker volume no-preload-820576
	I1124 09:04:50.353130  696018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1124 09:04:50.353205  696018 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:04:50.353233  696018 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:04:50.353275  696018 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:04:50.412447  696018 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-820576 --name no-preload-820576 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820576 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-820576 --network no-preload-820576 --ip 192.168.85.2 --volume no-preload-820576:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:04:51.174340  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Running}}
	I1124 09:04:51.195074  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:04:51.216706  696018 cli_runner.go:164] Run: docker exec no-preload-820576 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:04:51.270513  696018 oci.go:144] the created container "no-preload-820576" has a running status.
	I1124 09:04:51.270555  696018 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa...
	I1124 09:04:51.639069  696018 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:04:51.669871  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:04:51.693409  696018 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:04:51.693441  696018 kic_runner.go:114] Args: [docker exec --privileged no-preload-820576 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:04:51.754414  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:04:51.781590  696018 machine.go:94] provisionDockerMachine start ...
	I1124 09:04:51.781685  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:51.808597  696018 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:51.809054  696018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 09:04:51.809092  696018 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:04:51.963230  696018 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-820576
	
	I1124 09:04:51.963276  696018 ubuntu.go:182] provisioning hostname "no-preload-820576"
	I1124 09:04:51.963339  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:51.984069  696018 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:51.984406  696018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 09:04:51.984432  696018 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-820576 && echo "no-preload-820576" | sudo tee /etc/hostname
	I1124 09:04:52.142431  696018 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-820576
	
	I1124 09:04:52.142545  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.163141  696018 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:52.163483  696018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 09:04:52.163520  696018 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820576/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:04:52.313074  696018 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:04:52.313103  696018 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:04:52.313151  696018 ubuntu.go:190] setting up certificates
	I1124 09:04:52.313174  696018 provision.go:84] configureAuth start
	I1124 09:04:52.313241  696018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:04:52.333178  696018 provision.go:143] copyHostCerts
	I1124 09:04:52.333250  696018 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:04:52.333267  696018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:04:52.333340  696018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:04:52.333454  696018 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:04:52.333479  696018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:04:52.333527  696018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:04:52.333610  696018 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:04:52.333631  696018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:04:52.333670  696018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:04:52.333745  696018 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.no-preload-820576 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-820576]
	I1124 09:04:52.372869  696018 provision.go:177] copyRemoteCerts
	I1124 09:04:52.372936  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:04:52.372984  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.391516  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.495715  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:04:52.515508  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 09:04:52.533110  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:04:52.549620  696018 provision.go:87] duration metric: took 236.431147ms to configureAuth
	I1124 09:04:52.549643  696018 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:04:52.549785  696018 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:04:52.549795  696018 machine.go:97] duration metric: took 768.185276ms to provisionDockerMachine
	I1124 09:04:52.549801  696018 client.go:176] duration metric: took 4.656107804s to LocalClient.Create
	I1124 09:04:52.549817  696018 start.go:167] duration metric: took 4.656176839s to libmachine.API.Create "no-preload-820576"
	I1124 09:04:52.549827  696018 start.go:293] postStartSetup for "no-preload-820576" (driver="docker")
	I1124 09:04:52.549837  696018 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:04:52.549917  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:04:52.549957  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.567598  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.670209  696018 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:04:52.673794  696018 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:04:52.673819  696018 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:04:52.673829  696018 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:04:52.673873  696018 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:04:52.673954  696018 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:04:52.674055  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:04:52.681571  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:04:51.668051  695520 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:04:51.701732  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:04:51.724111  695520 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:04:51.724139  695520 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-128377 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:04:51.779671  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:04:51.808240  695520 machine.go:94] provisionDockerMachine start ...
	I1124 09:04:51.808514  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:51.833533  695520 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:51.833868  695520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 09:04:51.833890  695520 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:04:51.988683  695520 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-128377
	
	I1124 09:04:51.988712  695520 ubuntu.go:182] provisioning hostname "old-k8s-version-128377"
	I1124 09:04:51.988769  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.008953  695520 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:52.009275  695520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 09:04:52.009299  695520 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-128377 && echo "old-k8s-version-128377" | sudo tee /etc/hostname
	I1124 09:04:52.164712  695520 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-128377
	
	I1124 09:04:52.164811  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.184388  695520 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:52.184674  695520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 09:04:52.184701  695520 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-128377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-128377/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-128377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:04:52.328284  695520 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:04:52.328315  695520 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:04:52.328349  695520 ubuntu.go:190] setting up certificates
	I1124 09:04:52.328371  695520 provision.go:84] configureAuth start
	I1124 09:04:52.328437  695520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-128377
	I1124 09:04:52.347382  695520 provision.go:143] copyHostCerts
	I1124 09:04:52.347441  695520 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:04:52.347449  695520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:04:52.347530  695520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:04:52.347615  695520 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:04:52.347624  695520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:04:52.347646  695520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:04:52.347699  695520 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:04:52.347707  695520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:04:52.347724  695520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:04:52.347767  695520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-128377 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-128377]
	I1124 09:04:52.449836  695520 provision.go:177] copyRemoteCerts
	I1124 09:04:52.449907  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:04:52.449955  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.467389  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.568756  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:04:52.590911  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 09:04:52.608291  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:04:52.625476  695520 provision.go:87] duration metric: took 297.076146ms to configureAuth
	I1124 09:04:52.625501  695520 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:04:52.625684  695520 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:04:52.625697  695520 machine.go:97] duration metric: took 817.329123ms to provisionDockerMachine
	I1124 09:04:52.625703  695520 client.go:176] duration metric: took 5.811878386s to LocalClient.Create
	I1124 09:04:52.625724  695520 start.go:167] duration metric: took 5.811947677s to libmachine.API.Create "old-k8s-version-128377"
	I1124 09:04:52.625737  695520 start.go:293] postStartSetup for "old-k8s-version-128377" (driver="docker")
	I1124 09:04:52.625751  695520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:04:52.625805  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:04:52.625861  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.643125  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.746507  695520 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:04:52.750419  695520 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:04:52.750446  695520 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:04:52.750471  695520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:04:52.750527  695520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:04:52.750621  695520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:04:52.750735  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:04:52.759275  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:04:52.779524  695520 start.go:296] duration metric: took 153.769147ms for postStartSetup
	I1124 09:04:52.779876  695520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-128377
	I1124 09:04:52.797331  695520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/config.json ...
	I1124 09:04:52.797607  695520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:04:52.797652  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.814633  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.914421  695520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:04:52.919231  695520 start.go:128] duration metric: took 6.107446039s to createHost
	I1124 09:04:52.919259  695520 start.go:83] releasing machines lock for "old-k8s-version-128377", held for 6.10762389s
	I1124 09:04:52.919326  695520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-128377
	I1124 09:04:52.937920  695520 ssh_runner.go:195] Run: cat /version.json
	I1124 09:04:52.937964  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.937993  695520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:04:52.938073  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.957005  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.957162  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:53.162492  695520 ssh_runner.go:195] Run: systemctl --version
	I1124 09:04:53.168749  695520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:04:53.173128  695520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:04:53.173198  695520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:04:53.196703  695520 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:04:53.196732  695520 start.go:496] detecting cgroup driver to use...
	I1124 09:04:53.196770  695520 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:04:53.196824  695520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:04:53.212821  695520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:04:53.226105  695520 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:04:53.226149  695520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:04:53.245323  695520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:04:53.261892  695520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:04:53.346225  695520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:04:53.440817  695520 docker.go:234] disabling docker service ...
	I1124 09:04:53.440886  695520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:04:53.466043  695520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:04:53.478621  695520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:04:53.566248  695520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:04:53.652228  695520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:04:53.665204  695520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:04:53.679300  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 09:04:53.689354  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:04:53.697996  695520 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:04:53.698043  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:04:53.706349  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.715138  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:04:53.724198  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.732594  695520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:04:53.740362  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:04:53.748766  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:04:53.757048  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:04:53.765265  695520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:04:53.772343  695520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:04:53.779254  695520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:04:53.856087  695520 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 09:04:53.959050  695520 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:04:53.959110  695520 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:04:53.963133  695520 start.go:564] Will wait 60s for crictl version
	I1124 09:04:53.963185  695520 ssh_runner.go:195] Run: which crictl
	I1124 09:04:53.966895  695520 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:04:53.994878  695520 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:04:53.994934  695520 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.021265  695520 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.045827  695520 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1124 09:04:52.701569  696018 start.go:296] duration metric: took 151.731915ms for postStartSetup
	I1124 09:04:52.701858  696018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:04:52.719203  696018 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json ...
	I1124 09:04:52.719424  696018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:04:52.719488  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.736084  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.835481  696018 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:04:52.840061  696018 start.go:128] duration metric: took 4.94947332s to createHost
	I1124 09:04:52.840083  696018 start.go:83] releasing machines lock for "no-preload-820576", held for 4.94964132s
	I1124 09:04:52.840148  696018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:04:52.858132  696018 ssh_runner.go:195] Run: cat /version.json
	I1124 09:04:52.858160  696018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:04:52.858222  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.858246  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.877130  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.877482  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.975607  696018 ssh_runner.go:195] Run: systemctl --version
	I1124 09:04:53.031452  696018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:04:53.036065  696018 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:04:53.036130  696018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:04:53.059999  696018 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:04:53.060024  696018 start.go:496] detecting cgroup driver to use...
	I1124 09:04:53.060062  696018 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:04:53.060105  696018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:04:53.074505  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:04:53.086089  696018 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:04:53.086143  696018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:04:53.101555  696018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:04:53.118093  696018 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:04:53.204201  696018 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:04:53.300933  696018 docker.go:234] disabling docker service ...
	I1124 09:04:53.301034  696018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:04:53.320036  696018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:04:53.331959  696018 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:04:53.420508  696018 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:04:53.513830  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:04:53.526253  696018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:04:53.540562  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:53.865082  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 09:04:53.876277  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:04:53.885584  696018 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:04:53.885655  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:04:53.895158  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.904766  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:04:53.913841  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.922747  696018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:04:53.932360  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:04:53.943272  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:04:53.952416  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:04:53.961850  696018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:04:53.969795  696018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:04:53.977270  696018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:04:54.067216  696018 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 09:04:54.151776  696018 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:04:54.151849  696018 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:04:54.156309  696018 start.go:564] Will wait 60s for crictl version
	I1124 09:04:54.156367  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:54.160683  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:04:54.187130  696018 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:04:54.187193  696018 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.208524  696018 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.233294  696018 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1124 09:04:49.920675  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:49.921171  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:50.420805  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:50.421212  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:50.920534  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:54.046841  695520 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:54.064168  695520 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 09:04:54.068915  695520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:04:54.079411  695520 kubeadm.go:884] updating cluster {Name:old-k8s-version-128377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:04:54.079584  695520 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 09:04:54.079651  695520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:04:54.105064  695520 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:04:54.105092  695520 containerd.go:534] Images already preloaded, skipping extraction
	I1124 09:04:54.105153  695520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:04:54.131723  695520 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:04:54.131746  695520 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:04:54.131756  695520 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 containerd true true} ...
	I1124 09:04:54.131858  695520 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-128377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:04:54.131921  695520 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:04:54.160918  695520 cni.go:84] Creating CNI manager for ""
	I1124 09:04:54.160940  695520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:04:54.160955  695520 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:04:54.160976  695520 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-128377 NodeName:old-k8s-version-128377 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:04:54.161123  695520 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-128377"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:04:54.161190  695520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 09:04:54.169102  695520 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:04:54.169150  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:04:54.176962  695520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1124 09:04:54.191252  695520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:04:54.206931  695520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I1124 09:04:54.220958  695520 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:04:54.225158  695520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:04:54.236116  695520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:04:54.319599  695520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:04:54.342135  695520 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377 for IP: 192.168.103.2
	I1124 09:04:54.342157  695520 certs.go:195] generating shared ca certs ...
	I1124 09:04:54.342176  695520 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.342355  695520 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:04:54.342406  695520 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:04:54.342416  695520 certs.go:257] generating profile certs ...
	I1124 09:04:54.342497  695520 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key
	I1124 09:04:54.342513  695520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt with IP's: []
	I1124 09:04:54.488402  695520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt ...
	I1124 09:04:54.488432  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt: {Name:mk87cd521056210340bc5798f0387b3f36dc4635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.488613  695520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key ...
	I1124 09:04:54.488628  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key: {Name:mk03c81f6da2f2b54dfd9fa0e30866e3372921ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.488712  695520 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1
	I1124 09:04:54.488729  695520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1124 09:04:54.543616  695520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1 ...
	I1124 09:04:54.543654  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1: {Name:mk2f5faeeb1a8cba2153625fbd7d3a7e54f95aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.543851  695520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1 ...
	I1124 09:04:54.543873  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1: {Name:mk7ed4cadcafdc2e1a661255372b702ae6719654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.543964  695520 certs.go:382] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt
	I1124 09:04:54.544040  695520 certs.go:386] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key
	I1124 09:04:54.544132  695520 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key
	I1124 09:04:54.544150  695520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt with IP's: []
	I1124 09:04:54.594781  695520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt ...
	I1124 09:04:54.594837  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt: {Name:mk33ff647329a0bdf714fd27ddf109ec15b6d483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.595015  695520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key ...
	I1124 09:04:54.595034  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key: {Name:mk9bf52d92c35c053f63b6073f2a38e1ff2182d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.595287  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:04:54.595344  695520 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:04:54.595359  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:04:54.595395  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:04:54.595433  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:04:54.595484  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:04:54.595553  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:04:54.596350  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:04:54.616384  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:04:54.633998  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:04:54.651552  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:04:54.669737  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 09:04:54.686876  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:04:54.703726  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:04:54.720840  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:04:54.737534  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:04:54.757717  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:04:54.774715  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:04:54.791052  695520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:04:54.802968  695520 ssh_runner.go:195] Run: openssl version
	I1124 09:04:54.808893  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:04:54.816748  695520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:04:54.820220  695520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:04:54.820260  695520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:04:54.854133  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:04:54.862216  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:04:54.870277  695520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:04:54.873860  695520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:04:54.873906  695520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:04:54.910146  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:04:54.919148  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:04:54.927753  695520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:04:54.931870  695520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:04:54.931921  695520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:04:54.972285  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:04:54.981223  695520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:04:54.984999  695520 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:04:54.985067  695520 kubeadm.go:401] StartCluster: {Name:old-k8s-version-128377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:04:54.985165  695520 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:04:54.985213  695520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:04:55.012874  695520 cri.go:89] found id: ""
	I1124 09:04:55.012940  695520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:04:55.020831  695520 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:04:55.029069  695520 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:04:55.029111  695520 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:04:55.036334  695520 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:04:55.036348  695520 kubeadm.go:158] found existing configuration files:
	
	I1124 09:04:55.036384  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:04:55.044532  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:04:55.044579  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:04:55.051885  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:04:55.059335  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:04:55.059381  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:04:55.066924  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:04:55.075157  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:04:55.075202  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:04:55.082536  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:04:55.090276  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:04:55.090333  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
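The four grep/rm pairs above are the stale-kubeconfig check: any existing file under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so that kubeadm init can regenerate it cleanly. A compact sketch of the same check, collapsed into a single shell loop purely for illustration (minikube issues each grep and rm as a separate ssh_runner call, as logged above):

    for f in admin kubelet controller-manager scheduler; do
      # Keep the kubeconfig only if it already points at the expected control-plane endpoint.
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done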
	I1124 09:04:55.097848  695520 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:04:55.141844  695520 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 09:04:55.142222  695520 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:04:55.176293  695520 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:04:55.176360  695520 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:04:55.176399  695520 kubeadm.go:319] OS: Linux
	I1124 09:04:55.176522  695520 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:04:55.176607  695520 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:04:55.176692  695520 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:04:55.176788  695520 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:04:55.176861  695520 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:04:55.176926  695520 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:04:55.177000  695520 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:04:55.177072  695520 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:04:55.267260  695520 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:04:55.267430  695520 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:04:55.267573  695520 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 09:04:55.406819  695520 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:04:55.408942  695520 out.go:252]   - Generating certificates and keys ...
	I1124 09:04:55.409040  695520 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:04:55.409154  695520 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:04:55.535942  695520 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:04:55.747446  695520 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:04:56.231180  695520 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:04:56.348617  695520 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:04:56.564540  695520 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:04:56.564771  695520 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-128377] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 09:04:54.234417  696018 cli_runner.go:164] Run: docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:54.252265  696018 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:04:54.256402  696018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:04:54.271173  696018 kubeadm.go:884] updating cluster {Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:04:54.271376  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:54.585565  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:54.895614  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:55.213448  696018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:04:55.213537  696018 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:04:55.248674  696018 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1124 09:04:55.248704  696018 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.5.24-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 09:04:55.248761  696018 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:55.248818  696018 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.248841  696018 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.248860  696018 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 09:04:55.248864  696018 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.248833  696018 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.248841  696018 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.249034  696018 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.250186  696018 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.250215  696018 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:55.250182  696018 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.24-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.250186  696018 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 09:04:55.250253  696018 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.250254  696018 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.250188  696018 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.250648  696018 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.411211  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139"
	I1124 09:04:55.411274  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.432666  696018 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1124 09:04:55.432717  696018 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.432775  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.436380  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810"
	I1124 09:04:55.436448  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.436570  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.438317  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b"
	I1124 09:04:55.438376  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.445544  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc"
	I1124 09:04:55.445608  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.462611  696018 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1124 09:04:55.462672  696018 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.462735  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.466873  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1124 09:04:55.466944  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1124 09:04:55.469707  696018 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1124 09:04:55.469760  696018 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.469761  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.469806  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.476188  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.5.24-0" and sha "8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d"
	I1124 09:04:55.476260  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.476601  696018 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1124 09:04:55.476645  696018 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.476700  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.476760  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.483510  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46"
	I1124 09:04:55.483571  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.493634  696018 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 09:04:55.493674  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.493687  696018 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 09:04:55.493730  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.504559  696018 cache_images.go:118] "registry.k8s.io/etcd:3.5.24-0" needs transfer: "registry.k8s.io/etcd:3.5.24-0" does not exist at hash "8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d" in container runtime
	I1124 09:04:55.504599  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.504606  696018 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.504646  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.512866  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.512892  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.512910  696018 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1124 09:04:55.512950  696018 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.512990  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.526695  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:04:55.526717  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.526785  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.539513  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1124 09:04:55.539663  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:04:55.546674  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.546750  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.546715  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.564076  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.567023  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1124 09:04:55.567049  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.567061  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1124 09:04:55.567151  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:04:55.598524  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.598552  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.598652  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1124 09:04:55.598735  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:04:55.614879  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.624975  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:55.625072  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:55.679323  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:04:55.684055  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1124 09:04:55.684090  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.684124  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.684140  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:04:55.684150  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0
	I1124 09:04:55.684159  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.684160  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1124 09:04:55.684171  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1124 09:04:55.684244  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:04:55.736024  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 09:04:55.736135  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 09:04:55.746073  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.746108  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1124 09:04:55.746157  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:55.746175  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.24-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.24-0': No such file or directory
	I1124 09:04:55.746191  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 --> /var/lib/minikube/images/etcd_3.5.24-0 (23728640 bytes)
	I1124 09:04:55.746248  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:55.801010  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 09:04:55.801049  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 09:04:55.808405  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.808441  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1124 09:04:55.880897  696018 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 09:04:55.880969  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1124 09:04:56.015999  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
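Because no preload tarball exists for v1.35.0-beta.0 (see the "assuming images are not preloaded" line earlier), every image goes through the same fallback cycle that just completed for pause:3.10.1: look it up in the local Docker daemon, look it up in the node's containerd, stage the cached tarball inside the node, then import it. A minimal sketch of one such cycle using the paths from this run; the <node> placeholder and the plain scp invocation are illustrative, since minikube drives these steps through its own ssh_runner:

    # Is the image already present in containerd's k8s.io namespace inside the node?
    sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1

    # If not, stage the cached tarball in the node and import it.
    scp .minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 \
        <node>:/var/lib/minikube/images/pause_3.10.1
    sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1

    # Confirm the CRI runtime now sees it.
    sudo crictl images | grep pause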
	I1124 09:04:56.068815  696018 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:04:56.068912  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:04:56.453297  696018 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1124 09:04:56.453371  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:57.304727  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.24-0: (1.235782073s)
	I1124 09:04:57.304763  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 from cache
	I1124 09:04:57.304794  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:57.304806  696018 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 09:04:57.304847  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:57.304858  696018 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:57.304920  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:56.768431  695520 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:04:56.768677  695520 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-128377] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 09:04:57.042517  695520 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:04:57.135211  695520 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:04:57.487492  695520 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:04:57.487607  695520 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:04:57.647815  695520 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:04:57.788032  695520 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:04:58.007063  695520 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:04:58.262043  695520 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:04:58.262616  695520 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:04:58.265868  695520 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:04:55.921561  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:04:55.921607  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:58.266858  695520 out.go:252]   - Booting up control plane ...
	I1124 09:04:58.266989  695520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:04:58.267065  695520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:04:58.267746  695520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:04:58.282824  695520 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:04:58.283699  695520 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:04:58.283773  695520 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:04:58.419897  695520 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 09:04:58.797650  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.492766226s)
	I1124 09:04:58.797672  696018 ssh_runner.go:235] Completed: which crictl: (1.492732478s)
	I1124 09:04:58.797693  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1124 09:04:58.797722  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:58.797742  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:58.797763  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:59.494097  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1124 09:04:59.494141  696018 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:04:59.494193  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:04:59.494314  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:00.636087  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.141861944s)
	I1124 09:05:00.636150  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1124 09:05:00.636183  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:05:00.636184  696018 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.141835433s)
	I1124 09:05:00.636272  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:05:00.636277  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:01.829551  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.193240306s)
	I1124 09:05:01.829586  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1124 09:05:01.829561  696018 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.193259021s)
	I1124 09:05:01.829618  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:05:01.829656  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:05:01.829661  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 09:05:01.829741  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:05:02.922442  695520 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502768 seconds
	I1124 09:05:02.922650  695520 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:05:02.938003  695520 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:05:03.487168  695520 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:05:03.487569  695520 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-128377 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:05:03.997647  695520 kubeadm.go:319] [bootstrap-token] Using token: jnao2u.ovlrxqviyhx4po41
	I1124 09:05:03.999063  695520 out.go:252]   - Configuring RBAC rules ...
	I1124 09:05:03.999223  695520 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:05:04.003823  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:05:04.010298  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:05:04.012923  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:05:04.015535  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:05:04.019043  695520 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:05:04.029389  695520 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:05:04.209549  695520 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:05:04.407855  695520 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:05:04.408750  695520 kubeadm.go:319] 
	I1124 09:05:04.408814  695520 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:05:04.408821  695520 kubeadm.go:319] 
	I1124 09:05:04.408930  695520 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:05:04.408949  695520 kubeadm.go:319] 
	I1124 09:05:04.408983  695520 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:05:04.409060  695520 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:05:04.409107  695520 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:05:04.409122  695520 kubeadm.go:319] 
	I1124 09:05:04.409207  695520 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:05:04.409227  695520 kubeadm.go:319] 
	I1124 09:05:04.409283  695520 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:05:04.409289  695520 kubeadm.go:319] 
	I1124 09:05:04.409340  695520 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:05:04.409401  695520 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:05:04.409519  695520 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:05:04.409531  695520 kubeadm.go:319] 
	I1124 09:05:04.409633  695520 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:05:04.409739  695520 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:05:04.409748  695520 kubeadm.go:319] 
	I1124 09:05:04.409856  695520 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jnao2u.ovlrxqviyhx4po41 \
	I1124 09:05:04.409989  695520 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be \
	I1124 09:05:04.410028  695520 kubeadm.go:319] 	--control-plane 
	I1124 09:05:04.410043  695520 kubeadm.go:319] 
	I1124 09:05:04.410157  695520 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:05:04.410168  695520 kubeadm.go:319] 
	I1124 09:05:04.410253  695520 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jnao2u.ovlrxqviyhx4po41 \
	I1124 09:05:04.410416  695520 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be 
	I1124 09:05:04.412734  695520 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:05:04.412863  695520 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:05:04.412887  695520 cni.go:84] Creating CNI manager for ""
	I1124 09:05:04.412895  695520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:05:04.414780  695520 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:05:00.922661  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:00.922710  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:04.415630  695520 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:05:04.420099  695520 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 09:05:04.420115  695520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:05:04.433073  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:05:05.091722  695520 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:05:05.091870  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-128377 minikube.k8s.io/updated_at=2025_11_24T09_05_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=old-k8s-version-128377 minikube.k8s.io/primary=true
	I1124 09:05:05.092348  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:05.102498  695520 ops.go:34] apiserver oom_adj: -16
	I1124 09:05:05.174868  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:05.675283  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:06.175310  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:02.915588  696018 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.085815853s)
	I1124 09:05:02.915634  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.085954166s)
	I1124 09:05:02.915671  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1124 09:05:02.915639  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 09:05:02.915716  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 09:05:02.976753  696018 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:05:02.976825  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:05:03.348632  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 09:05:03.348678  696018 cache_images.go:125] Successfully loaded all cached images
	I1124 09:05:03.348686  696018 cache_images.go:94] duration metric: took 8.099965824s to LoadCachedImages
	I1124 09:05:03.348703  696018 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1124 09:05:03.348825  696018 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
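The generated ExecStart drop-in above only takes effect once it is written into the node and systemd is reloaded, which the log does a few lines further down (scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, daemon-reload, start kubelet). A minimal sketch of that sequence; the local 10-kubeadm.conf file (holding the [Service]/ExecStart block printed above) is an illustrative stand-in for minikube's in-memory scp:

    # Install the drop-in generated above.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # Make systemd pick it up and start the kubelet.
    sudo systemctl daemon-reload
    sudo systemctl start kubelet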
	I1124 09:05:03.348894  696018 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:05:03.376137  696018 cni.go:84] Creating CNI manager for ""
	I1124 09:05:03.376168  696018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:05:03.376188  696018 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:05:03.376210  696018 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820576 NodeName:no-preload-820576 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:05:03.376350  696018 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-820576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
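The kubeadm config printed above is not applied from memory: the log later writes it to /var/tmp/minikube/kubeadm.yaml.new, and (as the old-k8s-version profile shows earlier in this report) the .new file is promoted to kubeadm.yaml before being handed to kubeadm init with the profile's pinned binaries on PATH. A condensed sketch of that hand-off, with the --ignore-preflight-errors list abbreviated relative to the full set logged for the other profile:

    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem'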
	
	I1124 09:05:03.376422  696018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:05:03.385368  696018 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1124 09:05:03.385424  696018 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:05:03.394095  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1124 09:05:03.394128  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:05:03.394180  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1124 09:05:03.394191  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1124 09:05:03.394205  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1124 09:05:03.394225  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:05:03.399712  696018 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1124 09:05:03.399743  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1124 09:05:03.399797  696018 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1124 09:05:03.399839  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1124 09:05:03.414063  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1124 09:05:03.448582  696018 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1124 09:05:03.448623  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1124 09:05:03.941988  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:05:03.950659  696018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1124 09:05:03.964545  696018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:05:03.980698  696018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
	I1124 09:05:03.994370  696018 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:05:03.999682  696018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
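The one-liner above is an idempotent /etc/hosts update: it strips any existing control-plane.minikube.internal entry, appends the current mapping for 192.168.85.2, and copies the temp file back over /etc/hosts. The same commands, unpacked for readability:

    # Drop any stale control-plane.minikube.internal line, keep everything else.
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
    # Append the current mapping and install the result with sudo cp.
    printf '192.168.85.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts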
	I1124 09:05:04.011951  696018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:05:04.105068  696018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:05:04.129581  696018 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576 for IP: 192.168.85.2
	I1124 09:05:04.129609  696018 certs.go:195] generating shared ca certs ...
	I1124 09:05:04.129631  696018 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.129796  696018 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:05:04.129861  696018 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:05:04.129876  696018 certs.go:257] generating profile certs ...
	I1124 09:05:04.129944  696018 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key
	I1124 09:05:04.129964  696018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt with IP's: []
	I1124 09:05:04.178331  696018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt ...
	I1124 09:05:04.178368  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt: {Name:mk7a6d48f62cb24db3b80fa6902658a2fab15360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.178586  696018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key ...
	I1124 09:05:04.178605  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key: {Name:mke761c4ec29e36beccc716dc800bc8fd841e3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.178724  696018 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632
	I1124 09:05:04.178748  696018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 09:05:04.417670  696018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632 ...
	I1124 09:05:04.417694  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632: {Name:mk59a2d57d772e51aeeeb2a9a4dca760203e6d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.417874  696018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632 ...
	I1124 09:05:04.417897  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632: {Name:mkdb0be38fd80ef77438b49aa69b9308c6d28ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.418023  696018 certs.go:382] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt
	I1124 09:05:04.418147  696018 certs.go:386] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key
	I1124 09:05:04.418202  696018 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key
	I1124 09:05:04.418217  696018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt with IP's: []
	I1124 09:05:04.604435  696018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt ...
	I1124 09:05:04.604497  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt: {Name:mk5719f2112f16d39272baf4588ce9b65d33d2a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.604728  696018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key ...
	I1124 09:05:04.604746  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key: {Name:mk56d8ccc21a879d6506ee3380097e85fb4b4f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.605022  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:05:04.605073  696018 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:05:04.605084  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:05:04.605120  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:05:04.605160  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:05:04.605195  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:05:04.605369  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:05:04.606568  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:05:04.626964  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:05:04.644973  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:05:04.663649  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:05:04.681360  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:05:04.699027  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:05:04.716381  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:05:04.734298  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:05:04.752033  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:05:04.771861  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:05:04.789824  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:05:04.808313  696018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:05:04.826085  696018 ssh_runner.go:195] Run: openssl version
	I1124 09:05:04.834356  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:05:04.843772  696018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:05:04.848660  696018 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:05:04.848725  696018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:05:04.887168  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:05:04.897113  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:05:04.907480  696018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:05:04.911694  696018 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:05:04.911746  696018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:05:04.951326  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:05:04.961765  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:05:04.972056  696018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:05:04.976497  696018 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:05:04.976554  696018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:05:05.017003  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:05:05.027292  696018 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:05:05.031547  696018 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:05:05.031616  696018 kubeadm.go:401] StartCluster: {Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:05:05.031711  696018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:05:05.031765  696018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:05:05.062044  696018 cri.go:89] found id: ""
	I1124 09:05:05.062126  696018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:05:05.071887  696018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:05:05.082157  696018 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:05:05.082217  696018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:05:05.091225  696018 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:05:05.091248  696018 kubeadm.go:158] found existing configuration files:
	
	I1124 09:05:05.091296  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:05:05.100600  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:05:05.100657  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:05:05.110555  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:05:05.119216  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:05:05.119288  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:05:05.127876  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:05:05.136154  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:05:05.136205  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:05:05.145077  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:05:05.154290  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:05:05.154338  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:05:05.162702  696018 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:05:05.200662  696018 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:05:05.200757  696018 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:05:05.269623  696018 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:05:05.269714  696018 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:05:05.269770  696018 kubeadm.go:319] OS: Linux
	I1124 09:05:05.269842  696018 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:05:05.269920  696018 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:05:05.270003  696018 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:05:05.270084  696018 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:05:05.270155  696018 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:05:05.270223  696018 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:05:05.270303  696018 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:05:05.270377  696018 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:05:05.332844  696018 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:05:05.332992  696018 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:05:05.333150  696018 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:05:06.734694  696018 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:05:06.738817  696018 out.go:252]   - Generating certificates and keys ...
	I1124 09:05:06.738929  696018 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:05:06.739072  696018 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:05:06.832143  696018 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:05:06.955015  696018 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:05:07.027143  696018 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:05:07.115762  696018 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:05:07.265716  696018 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:05:07.265857  696018 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-820576] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 09:05:07.364684  696018 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:05:07.364865  696018 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-820576] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 09:05:07.523315  696018 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:05:07.590589  696018 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:05:07.746307  696018 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:05:07.746426  696018 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:05:07.869677  696018 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:05:07.978931  696018 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:05:08.053720  696018 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:05:08.085227  696018 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:05:08.160011  696018 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:05:08.160849  696018 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:05:08.165435  696018 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:05:05.923694  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:05.923742  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:06.675415  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:07.175277  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:07.676031  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:08.174962  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:08.675088  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:09.175102  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:09.675096  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:10.175027  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:10.675655  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:11.175703  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:08.166975  696018 out.go:252]   - Booting up control plane ...
	I1124 09:05:08.167117  696018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:05:08.167189  696018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:05:08.167816  696018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:05:08.183769  696018 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:05:08.183936  696018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:05:08.191856  696018 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:05:08.191990  696018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:05:08.192031  696018 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:05:08.308076  696018 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:05:08.308205  696018 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:05:09.309901  696018 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001908715s
	I1124 09:05:09.316051  696018 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:05:09.316157  696018 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 09:05:09.316247  696018 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:05:09.316315  696018 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:05:10.320869  696018 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004644301s
	I1124 09:05:10.832866  696018 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.516703459s
	I1124 09:05:12.317179  696018 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001080604s
	I1124 09:05:12.331544  696018 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:05:12.339378  696018 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:05:12.347526  696018 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:05:12.347705  696018 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-820576 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:05:12.354657  696018 kubeadm.go:319] [bootstrap-token] Using token: awoygq.wealvtzys3befsou
	I1124 09:05:12.355757  696018 out.go:252]   - Configuring RBAC rules ...
	I1124 09:05:12.355888  696018 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:05:12.359613  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:05:12.364202  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:05:12.366491  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:05:12.369449  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:05:12.371508  696018 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:05:12.722783  696018 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:05:13.137535  696018 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:05:13.723038  696018 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:05:13.724197  696018 kubeadm.go:319] 
	I1124 09:05:13.724302  696018 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:05:13.724317  696018 kubeadm.go:319] 
	I1124 09:05:13.724412  696018 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:05:13.724424  696018 kubeadm.go:319] 
	I1124 09:05:13.724520  696018 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:05:13.724630  696018 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:05:13.724716  696018 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:05:13.724730  696018 kubeadm.go:319] 
	I1124 09:05:13.724818  696018 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:05:13.724831  696018 kubeadm.go:319] 
	I1124 09:05:13.724897  696018 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:05:13.724906  696018 kubeadm.go:319] 
	I1124 09:05:13.724990  696018 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:05:13.725105  696018 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:05:13.725212  696018 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:05:13.725221  696018 kubeadm.go:319] 
	I1124 09:05:13.725338  696018 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:05:13.725493  696018 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:05:13.725510  696018 kubeadm.go:319] 
	I1124 09:05:13.725601  696018 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token awoygq.wealvtzys3befsou \
	I1124 09:05:13.725765  696018 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be \
	I1124 09:05:13.725804  696018 kubeadm.go:319] 	--control-plane 
	I1124 09:05:13.725816  696018 kubeadm.go:319] 
	I1124 09:05:13.725934  696018 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:05:13.725944  696018 kubeadm.go:319] 
	I1124 09:05:13.726041  696018 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token awoygq.wealvtzys3befsou \
	I1124 09:05:13.726243  696018 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be 
	I1124 09:05:13.728504  696018 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:05:13.728661  696018 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:05:13.728704  696018 cni.go:84] Creating CNI manager for ""
	I1124 09:05:13.728716  696018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:05:13.730529  696018 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:05:10.924882  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:10.924923  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:11.109506  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:47578->192.168.76.2:8443: read: connection reset by peer
	I1124 09:05:11.421112  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:11.421646  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:11.920950  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:11.921496  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:12.421219  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:12.421692  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:12.921430  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:12.921911  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:13.420431  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:13.420926  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:13.920542  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:13.921060  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:14.420434  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:14.420859  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:11.675776  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:12.175192  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:12.675267  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:13.175941  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:13.675281  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:14.175267  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:14.675185  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.175391  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.675966  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.175887  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.675144  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.175281  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.260591  695520 kubeadm.go:1114] duration metric: took 12.168846115s to wait for elevateKubeSystemPrivileges
	I1124 09:05:17.260625  695520 kubeadm.go:403] duration metric: took 22.275566194s to StartCluster
	I1124 09:05:17.260655  695520 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:17.260738  695520 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:05:17.261860  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:17.262121  695520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:05:17.262124  695520 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:05:17.262197  695520 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:05:17.262308  695520 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-128377"
	I1124 09:05:17.262334  695520 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-128377"
	I1124 09:05:17.262358  695520 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:05:17.262376  695520 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:05:17.262351  695520 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-128377"
	I1124 09:05:17.262443  695520 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-128377"
	I1124 09:05:17.262844  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:05:17.263075  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:05:17.263365  695520 out.go:179] * Verifying Kubernetes components...
	I1124 09:05:17.264408  695520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:05:17.287510  695520 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-128377"
	I1124 09:05:17.287559  695520 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:05:17.287978  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:05:17.288769  695520 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:13.732137  696018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:05:13.737711  696018 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1124 09:05:13.737726  696018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:05:13.752118  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:05:13.951744  696018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:05:13.951795  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:13.951847  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-820576 minikube.k8s.io/updated_at=2025_11_24T09_05_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=no-preload-820576 minikube.k8s.io/primary=true
	I1124 09:05:13.962047  696018 ops.go:34] apiserver oom_adj: -16
	I1124 09:05:14.022754  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:14.523671  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.023231  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.523083  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.023230  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.523666  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.022940  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.523444  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.290230  695520 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:17.290253  695520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:05:17.290314  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:05:17.317679  695520 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:17.317704  695520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:05:17.317768  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:05:17.319048  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:05:17.343853  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:05:17.366525  695520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:05:17.411998  695520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:05:17.447003  695520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:17.463082  695520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:17.632983  695520 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 09:05:17.634312  695520 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-128377" to be "Ready" ...
	I1124 09:05:17.888856  695520 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:05:18.022851  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:18.523601  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:18.589169  696018 kubeadm.go:1114] duration metric: took 4.637423043s to wait for elevateKubeSystemPrivileges
	I1124 09:05:18.589209  696018 kubeadm.go:403] duration metric: took 13.557597169s to StartCluster
	I1124 09:05:18.589237  696018 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:18.589321  696018 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:05:18.590747  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:18.590988  696018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:05:18.591000  696018 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:05:18.591095  696018 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:05:18.591206  696018 addons.go:70] Setting storage-provisioner=true in profile "no-preload-820576"
	I1124 09:05:18.591219  696018 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:05:18.591236  696018 addons.go:239] Setting addon storage-provisioner=true in "no-preload-820576"
	I1124 09:05:18.591251  696018 addons.go:70] Setting default-storageclass=true in profile "no-preload-820576"
	I1124 09:05:18.591275  696018 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:05:18.591283  696018 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820576"
	I1124 09:05:18.591664  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:05:18.591855  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:05:18.592299  696018 out.go:179] * Verifying Kubernetes components...
	I1124 09:05:18.593599  696018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:05:18.615163  696018 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:18.615451  696018 addons.go:239] Setting addon default-storageclass=true in "no-preload-820576"
	I1124 09:05:18.615530  696018 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:05:18.615851  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:05:18.616223  696018 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:18.616245  696018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:05:18.616301  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:05:18.646443  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:05:18.647885  696018 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:18.647963  696018 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:05:18.648059  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:05:18.675529  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:05:18.685797  696018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:05:18.752704  696018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:05:18.775922  696018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:18.800792  696018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:18.878758  696018 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 09:05:18.880873  696018 node_ready.go:35] waiting up to 6m0s for node "no-preload-820576" to be "Ready" ...
	I1124 09:05:19.096304  696018 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:05:14.921188  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:14.921633  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:15.421327  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:15.421818  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:15.920573  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:15.921034  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:16.421282  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:16.421841  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:16.921386  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:16.921942  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:17.420551  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:17.421007  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:17.920666  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:17.921181  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:18.420539  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:18.421011  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:18.920611  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:18.921079  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:19.420539  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:19.421004  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:17.889849  695520 addons.go:530] duration metric: took 627.656763ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:05:18.137738  695520 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-128377" context rescaled to 1 replicas
	W1124 09:05:19.637948  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	I1124 09:05:19.097398  696018 addons.go:530] duration metric: took 506.310963ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:05:19.383938  696018 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-820576" context rescaled to 1 replicas
	W1124 09:05:20.884989  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	I1124 09:05:19.920806  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:19.921207  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:20.420831  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:20.421312  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:20.920613  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:20.921185  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:21.420832  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:21.421240  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:21.920531  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:21.921019  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:22.420552  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1124 09:05:21.638057  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:23.638668  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:26.137883  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:23.383937  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	W1124 09:05:25.384443  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	I1124 09:05:27.421276  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:27.421318  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1124 09:05:28.138098  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:30.638120  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:27.884284  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	W1124 09:05:29.884474  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	W1124 09:05:32.384199  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	I1124 09:05:31.637332  695520 node_ready.go:49] node "old-k8s-version-128377" is "Ready"
	I1124 09:05:31.637368  695520 node_ready.go:38] duration metric: took 14.003009675s for node "old-k8s-version-128377" to be "Ready" ...
	I1124 09:05:31.637385  695520 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:05:31.637443  695520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:05:31.650126  695520 api_server.go:72] duration metric: took 14.387953281s to wait for apiserver process to appear ...
	I1124 09:05:31.650156  695520 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:05:31.650179  695520 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:05:31.654078  695520 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 09:05:31.655253  695520 api_server.go:141] control plane version: v1.28.0
	I1124 09:05:31.655280  695520 api_server.go:131] duration metric: took 5.117021ms to wait for apiserver health ...
	I1124 09:05:31.655289  695520 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:05:31.658830  695520 system_pods.go:59] 8 kube-system pods found
	I1124 09:05:31.658868  695520 system_pods.go:61] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:31.658877  695520 system_pods.go:61] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:31.658889  695520 system_pods.go:61] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:31.658895  695520 system_pods.go:61] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:31.658906  695520 system_pods.go:61] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:31.658910  695520 system_pods.go:61] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:31.658916  695520 system_pods.go:61] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:31.658921  695520 system_pods.go:61] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:31.658927  695520 system_pods.go:74] duration metric: took 3.632262ms to wait for pod list to return data ...
	I1124 09:05:31.658936  695520 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:05:31.660923  695520 default_sa.go:45] found service account: "default"
	I1124 09:05:31.660942  695520 default_sa.go:55] duration metric: took 2.000088ms for default service account to be created ...
	I1124 09:05:31.660950  695520 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:05:31.664223  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:31.664263  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:31.664272  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:31.664280  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:31.664284  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:31.664287  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:31.664291  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:31.664294  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:31.664300  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:31.664333  695520 retry.go:31] will retry after 195.108791ms: missing components: kube-dns
	I1124 09:05:31.863438  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:31.863494  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:31.863505  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:31.863515  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:31.863520  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:31.863525  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:31.863528  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:31.863540  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:31.863557  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:31.863579  695520 retry.go:31] will retry after 244.252087ms: missing components: kube-dns
	I1124 09:05:32.111547  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:32.111586  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:32.111595  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:32.111603  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:32.111608  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:32.111614  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:32.111628  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:32.111634  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:32.111641  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:32.111660  695520 retry.go:31] will retry after 471.342676ms: missing components: kube-dns
	I1124 09:05:32.587354  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:32.587384  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Running
	I1124 09:05:32.587389  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:32.587393  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:32.587397  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:32.587402  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:32.587405  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:32.587408  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:32.587411  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Running
	I1124 09:05:32.587420  695520 system_pods.go:126] duration metric: took 926.463548ms to wait for k8s-apps to be running ...
	I1124 09:05:32.587428  695520 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:05:32.587503  695520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:05:32.602305  695520 system_svc.go:56] duration metric: took 14.864147ms WaitForService to wait for kubelet
	I1124 09:05:32.602336  695520 kubeadm.go:587] duration metric: took 15.340181249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:05:32.602385  695520 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:05:32.605212  695520 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:05:32.605242  695520 node_conditions.go:123] node cpu capacity is 8
	I1124 09:05:32.605271  695520 node_conditions.go:105] duration metric: took 2.87532ms to run NodePressure ...
	I1124 09:05:32.605293  695520 start.go:242] waiting for startup goroutines ...
	I1124 09:05:32.605308  695520 start.go:247] waiting for cluster config update ...
	I1124 09:05:32.605327  695520 start.go:256] writing updated cluster config ...
	I1124 09:05:32.605690  695520 ssh_runner.go:195] Run: rm -f paused
	I1124 09:05:32.610319  695520 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:32.614557  695520 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vxxnm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.619322  695520 pod_ready.go:94] pod "coredns-5dd5756b68-vxxnm" is "Ready"
	I1124 09:05:32.619349  695520 pod_ready.go:86] duration metric: took 4.765973ms for pod "coredns-5dd5756b68-vxxnm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.622417  695520 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.626873  695520 pod_ready.go:94] pod "etcd-old-k8s-version-128377" is "Ready"
	I1124 09:05:32.626900  695520 pod_ready.go:86] duration metric: took 4.45394ms for pod "etcd-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.629800  695520 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.634310  695520 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-128377" is "Ready"
	I1124 09:05:32.634338  695520 pod_ready.go:86] duration metric: took 4.514426ms for pod "kube-apiserver-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.637382  695520 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.015375  695520 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-128377" is "Ready"
	I1124 09:05:33.015406  695520 pod_ready.go:86] duration metric: took 378.000797ms for pod "kube-controller-manager-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.215146  695520 pod_ready.go:83] waiting for pod "kube-proxy-fpbs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.614362  695520 pod_ready.go:94] pod "kube-proxy-fpbs2" is "Ready"
	I1124 09:05:33.614392  695520 pod_ready.go:86] duration metric: took 399.215049ms for pod "kube-proxy-fpbs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.815166  695520 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.214969  695520 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-128377" is "Ready"
	I1124 09:05:34.214999  695520 pod_ready.go:86] duration metric: took 399.806564ms for pod "kube-scheduler-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.215011  695520 pod_ready.go:40] duration metric: took 1.604660669s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:34.261989  695520 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 09:05:34.263612  695520 out.go:203] 
	W1124 09:05:34.264723  695520 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 09:05:34.265770  695520 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 09:05:34.267170  695520 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-128377" cluster and "default" namespace by default
	I1124 09:05:32.422898  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:32.423021  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:05:32.423106  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:05:32.453902  685562 cri.go:89] found id: "1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:05:32.453922  685562 cri.go:89] found id: "4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	I1124 09:05:32.453927  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:05:32.453929  685562 cri.go:89] found id: ""
	I1124 09:05:32.453937  685562 logs.go:282] 3 containers: [1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:05:32.454000  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.458469  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.462439  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.466262  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:05:32.466335  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:05:32.496086  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:05:32.496112  685562 cri.go:89] found id: ""
	I1124 09:05:32.496122  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:05:32.496186  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.500443  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:05:32.500532  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:05:32.528567  685562 cri.go:89] found id: ""
	I1124 09:05:32.528602  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.528610  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:05:32.528617  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:05:32.528677  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:05:32.557355  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:05:32.557375  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:05:32.557379  685562 cri.go:89] found id: ""
	I1124 09:05:32.557388  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:05:32.557445  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.561666  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.565691  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:05:32.565776  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:05:32.594818  685562 cri.go:89] found id: ""
	I1124 09:05:32.594841  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.594848  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:05:32.594855  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:05:32.594900  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:05:32.625049  685562 cri.go:89] found id: "4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:05:32.625068  685562 cri.go:89] found id: "87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0"
	I1124 09:05:32.625073  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:05:32.625078  685562 cri.go:89] found id: ""
	I1124 09:05:32.625087  685562 logs.go:282] 3 containers: [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d 87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:05:32.625142  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.630042  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.634965  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.639315  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:05:32.639376  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:05:32.669355  685562 cri.go:89] found id: ""
	I1124 09:05:32.669384  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.669392  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:05:32.669398  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:05:32.669449  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:05:32.697559  685562 cri.go:89] found id: ""
	I1124 09:05:32.697586  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.697596  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:05:32.697610  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:05:32.697645  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:05:32.736120  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:05:32.736153  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:05:32.768484  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:05:32.768526  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:05:32.836058  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:05:32.836100  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:05:32.853541  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:05:32.853613  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1124 09:05:33.384739  696018 node_ready.go:49] node "no-preload-820576" is "Ready"
	I1124 09:05:33.384778  696018 node_ready.go:38] duration metric: took 14.503869435s for node "no-preload-820576" to be "Ready" ...
	I1124 09:05:33.384797  696018 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:05:33.384861  696018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:05:33.401268  696018 api_server.go:72] duration metric: took 14.81022929s to wait for apiserver process to appear ...
	I1124 09:05:33.401299  696018 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:05:33.401324  696018 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 09:05:33.406015  696018 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 09:05:33.407175  696018 api_server.go:141] control plane version: v1.35.0-beta.0
	I1124 09:05:33.407215  696018 api_server.go:131] duration metric: took 5.908148ms to wait for apiserver health ...
	I1124 09:05:33.407226  696018 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:05:33.410293  696018 system_pods.go:59] 8 kube-system pods found
	I1124 09:05:33.410331  696018 system_pods.go:61] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.410338  696018 system_pods.go:61] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.410346  696018 system_pods.go:61] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.410352  696018 system_pods.go:61] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.410360  696018 system_pods.go:61] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.410365  696018 system_pods.go:61] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.410369  696018 system_pods.go:61] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.410382  696018 system_pods.go:61] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.410391  696018 system_pods.go:74] duration metric: took 3.156993ms to wait for pod list to return data ...
	I1124 09:05:33.410403  696018 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:05:33.413158  696018 default_sa.go:45] found service account: "default"
	I1124 09:05:33.413182  696018 default_sa.go:55] duration metric: took 2.772178ms for default service account to be created ...
	I1124 09:05:33.413192  696018 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:05:33.416818  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:33.416849  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.416856  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.416863  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.416868  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.416874  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.416879  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.416884  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.416891  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.416935  696018 retry.go:31] will retry after 275.944352ms: missing components: kube-dns
	I1124 09:05:33.697203  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:33.697247  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.697259  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.697269  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.697274  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.697285  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.697290  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.697297  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.697304  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.697327  696018 retry.go:31] will retry after 278.68714ms: missing components: kube-dns
	I1124 09:05:33.979933  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:33.979971  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.979977  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.979984  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.979987  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.979991  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.979994  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.979998  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.980003  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.980020  696018 retry.go:31] will retry after 448.083964ms: missing components: kube-dns
	I1124 09:05:34.432301  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:34.432341  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Running
	I1124 09:05:34.432350  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:34.432355  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:34.432362  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:34.432369  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:34.432374  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:34.432379  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:34.432384  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Running
	I1124 09:05:34.432395  696018 system_pods.go:126] duration metric: took 1.019195458s to wait for k8s-apps to be running ...
	I1124 09:05:34.432410  696018 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:05:34.432534  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:05:34.451401  696018 system_svc.go:56] duration metric: took 18.978773ms WaitForService to wait for kubelet
	I1124 09:05:34.451444  696018 kubeadm.go:587] duration metric: took 15.860405681s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:05:34.451483  696018 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:05:34.454386  696018 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:05:34.454410  696018 node_conditions.go:123] node cpu capacity is 8
	I1124 09:05:34.454427  696018 node_conditions.go:105] duration metric: took 2.938205ms to run NodePressure ...
	I1124 09:05:34.454440  696018 start.go:242] waiting for startup goroutines ...
	I1124 09:05:34.454450  696018 start.go:247] waiting for cluster config update ...
	I1124 09:05:34.454478  696018 start.go:256] writing updated cluster config ...
	I1124 09:05:34.454771  696018 ssh_runner.go:195] Run: rm -f paused
	I1124 09:05:34.459160  696018 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:34.462567  696018 pod_ready.go:83] waiting for pod "coredns-7d764666f9-b6dpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.466303  696018 pod_ready.go:94] pod "coredns-7d764666f9-b6dpn" is "Ready"
	I1124 09:05:34.466324  696018 pod_ready.go:86] duration metric: took 3.738029ms for pod "coredns-7d764666f9-b6dpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.468156  696018 pod_ready.go:83] waiting for pod "etcd-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.471750  696018 pod_ready.go:94] pod "etcd-no-preload-820576" is "Ready"
	I1124 09:05:34.471775  696018 pod_ready.go:86] duration metric: took 3.597676ms for pod "etcd-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.473507  696018 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.477092  696018 pod_ready.go:94] pod "kube-apiserver-no-preload-820576" is "Ready"
	I1124 09:05:34.477115  696018 pod_ready.go:86] duration metric: took 3.588223ms for pod "kube-apiserver-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.478724  696018 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.862953  696018 pod_ready.go:94] pod "kube-controller-manager-no-preload-820576" is "Ready"
	I1124 09:05:34.862977  696018 pod_ready.go:86] duration metric: took 384.235741ms for pod "kube-controller-manager-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:35.063039  696018 pod_ready.go:83] waiting for pod "kube-proxy-vz24l" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:35.463183  696018 pod_ready.go:94] pod "kube-proxy-vz24l" is "Ready"
	I1124 09:05:35.463217  696018 pod_ready.go:86] duration metric: took 400.149042ms for pod "kube-proxy-vz24l" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:35.664151  696018 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:36.063590  696018 pod_ready.go:94] pod "kube-scheduler-no-preload-820576" is "Ready"
	I1124 09:05:36.063619  696018 pod_ready.go:86] duration metric: took 399.441074ms for pod "kube-scheduler-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:36.063632  696018 pod_ready.go:40] duration metric: took 1.604443296s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:36.110852  696018 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1124 09:05:36.112796  696018 out.go:179] * Done! kubectl is now configured to use "no-preload-820576" cluster and "default" namespace by default
	I1124 09:05:43.195573  685562 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.341935277s)
	W1124 09:05:43.195644  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:44544->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:44544->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1124 09:05:43.195660  685562 logs.go:123] Gathering logs for kube-apiserver [1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365] ...
	I1124 09:05:43.195679  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:05:43.229092  685562 logs.go:123] Gathering logs for kube-apiserver [4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680] ...
	I1124 09:05:43.229122  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	W1124 09:05:43.256709  685562 logs.go:130] failed kube-apiserver [4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:05:43.254237    2218 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680\": not found" containerID="4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	time="2025-11-24T09:05:43Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680\": not found"
	 output: 
	** stderr ** 
	E1124 09:05:43.254237    2218 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680\": not found" containerID="4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	time="2025-11-24T09:05:43Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680\": not found"
	
	** /stderr **
	I1124 09:05:43.256732  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:05:43.256745  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:05:43.296899  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:05:43.296933  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:05:43.327780  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:05:43.327805  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:05:43.363107  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:05:43.363150  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:05:43.395896  685562 logs.go:123] Gathering logs for kube-controller-manager [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d] ...
	I1124 09:05:43.395929  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:05:43.423650  685562 logs.go:123] Gathering logs for kube-controller-manager [87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0] ...
	I1124 09:05:43.423680  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0"
	I1124 09:05:43.453581  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:05:43.453608  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	92908e44718b7       56cc512116c8f       9 seconds ago       Running             busybox                   0                   1ee15af433557       busybox                                          default
	a7a841ea7303a       ead0a4a53df89       14 seconds ago      Running             coredns                   0                   5cd1e9dd6b4b4       coredns-5dd5756b68-vxxnm                         kube-system
	a9a5857553e67       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   6128b1854bc49       storage-provisioner                              kube-system
	818537e08c060       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   cd819a24f784f       kindnet-gbp66                                    kube-system
	370631aaaf577       ea1030da44aa1       28 seconds ago      Running             kube-proxy                0                   17a629fbc9de7       kube-proxy-fpbs2                                 kube-system
	f5eddecfb179f       f6f496300a2ae       47 seconds ago      Running             kube-scheduler            0                   d4658a7b318ec       kube-scheduler-old-k8s-version-128377            kube-system
	5d9ec22e03b8b       4be79c38a4bab       47 seconds ago      Running             kube-controller-manager   0                   f3a2eced02a3b       kube-controller-manager-old-k8s-version-128377   kube-system
	842bd9db2d84b       bb5e0dde9054c       47 seconds ago      Running             kube-apiserver            0                   879c975eb1a53       kube-apiserver-old-k8s-version-128377            kube-system
	8df3112d99751       73deb9a3f7025       47 seconds ago      Running             etcd                      0                   78f7483f85b14       etcd-old-k8s-version-128377                      kube-system
	
	
	==> containerd <==
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.013913791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vxxnm,Uid:b84bae0f-9f75-4d1c-b2ed-da0c10a141cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cd1e9dd6b4b4d2ac225fd496f6fac6cfc490bdb385b217119ffd695f763abf3\""
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.016899714Z" level=info msg="CreateContainer within sandbox \"5cd1e9dd6b4b4d2ac225fd496f6fac6cfc490bdb385b217119ffd695f763abf3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.024116931Z" level=info msg="Container a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.030290587Z" level=info msg="CreateContainer within sandbox \"5cd1e9dd6b4b4d2ac225fd496f6fac6cfc490bdb385b217119ffd695f763abf3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5\""
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.030773995Z" level=info msg="StartContainer for \"a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5\""
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.031567693Z" level=info msg="connecting to shim a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5" address="unix:///run/containerd/s/7e80e31b141e93e01901781df29b4edcac7d62ec3fd02a2cc1cde1ffde438980" protocol=ttrpc version=3
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.070950416Z" level=info msg="StartContainer for \"a9a5857553e67019e47641c1970bb0d5555afd6b608c94a94501dd485efac0c4\" returns successfully"
	Nov 24 09:05:32 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:32.075707267Z" level=info msg="StartContainer for \"a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5\" returns successfully"
	Nov 24 09:05:34 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:34.747845169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bfaec734-d874-4dcb-b31f-feb87adccfca,Namespace:default,Attempt:0,}"
	Nov 24 09:05:34 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:34.786693345Z" level=info msg="connecting to shim 1ee15af4335571d5c2c1f8cf460b21232bfc82973349a4c00a86f5a2545492a2" address="unix:///run/containerd/s/b51cd8663d01a7c675d7f65aecc44f4b6281e3382088734fe56170e879775890" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 09:05:34 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:34.851781414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bfaec734-d874-4dcb-b31f-feb87adccfca,Namespace:default,Attempt:0,} returns sandbox id \"1ee15af4335571d5c2c1f8cf460b21232bfc82973349a4c00a86f5a2545492a2\""
	Nov 24 09:05:34 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:34.853515051Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.357982384Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.358604580Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.359790616Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.361443799Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.361898949Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.508337162s"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.361934177Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.363533599Z" level=info msg="CreateContainer within sandbox \"1ee15af4335571d5c2c1f8cf460b21232bfc82973349a4c00a86f5a2545492a2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.369396201Z" level=info msg="Container 92908e44718b76213a4fd87e310efd757d73940a581879283782328fd7a0dfa9: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.374660363Z" level=info msg="CreateContainer within sandbox \"1ee15af4335571d5c2c1f8cf460b21232bfc82973349a4c00a86f5a2545492a2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"92908e44718b76213a4fd87e310efd757d73940a581879283782328fd7a0dfa9\""
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.375182989Z" level=info msg="StartContainer for \"92908e44718b76213a4fd87e310efd757d73940a581879283782328fd7a0dfa9\""
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.376051696Z" level=info msg="connecting to shim 92908e44718b76213a4fd87e310efd757d73940a581879283782328fd7a0dfa9" address="unix:///run/containerd/s/b51cd8663d01a7c675d7f65aecc44f4b6281e3382088734fe56170e879775890" protocol=ttrpc version=3
	Nov 24 09:05:37 old-k8s-version-128377 containerd[661]: time="2025-11-24T09:05:37.425776823Z" level=info msg="StartContainer for \"92908e44718b76213a4fd87e310efd757d73940a581879283782328fd7a0dfa9\" returns successfully"
	Nov 24 09:05:43 old-k8s-version-128377 containerd[661]: E1124 09:05:43.526421     661 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54326 - 65005 "HINFO IN 6565264189616162908.3935264129304859187. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029224592s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-128377
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-128377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=old-k8s-version-128377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_05_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:05:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-128377
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:05:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:05:35 +0000   Mon, 24 Nov 2025 09:05:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:05:35 +0000   Mon, 24 Nov 2025 09:05:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:05:35 +0000   Mon, 24 Nov 2025 09:05:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:05:35 +0000   Mon, 24 Nov 2025 09:05:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-128377
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                220a6d4b-4a36-435b-ad8f-2d418f4618a1
	  Boot ID:                    f052cd47-57de-4521-b5fb-139979fdced9
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-vxxnm                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-128377                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-gbp66                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-128377             250m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-128377    200m (2%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-fpbs2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-128377             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node old-k8s-version-128377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node old-k8s-version-128377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x7 over 47s)  kubelet          Node old-k8s-version-128377 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-128377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-128377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-128377 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-128377 event: Registered Node old-k8s-version-128377 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-128377 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [8df3112d99751cf0ed66add055e0df50e3c944dbb66b787e2e3ae37efbec7d4e] <==
	{"level":"info","ts":"2025-11-24T09:05:00.107581Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T09:05:00.107626Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:05:00.107753Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:05:00.10778Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T09:05:00.10887Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T09:05:00.108869Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-24T09:05:01.710895Z","caller":"traceutil/trace.go:171","msg":"trace[1442253581] transaction","detail":"{read_only:false; response_revision:20; number_of_response:1; }","duration":"170.61339ms","start":"2025-11-24T09:05:01.540258Z","end":"2025-11-24T09:05:01.710871Z","steps":["trace[1442253581] 'process raft request'  (duration: 170.544438ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:01.711011Z","caller":"traceutil/trace.go:171","msg":"trace[699662152] transaction","detail":"{read_only:false; response_revision:19; number_of_response:1; }","duration":"172.264745ms","start":"2025-11-24T09:05:01.538726Z","end":"2025-11-24T09:05:01.710991Z","steps":["trace[699662152] 'process raft request'  (duration: 172.04013ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:05:01.711031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.576061ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/csr-9x9d8\" ","response":"range_response_count:1 size:895"}
	{"level":"info","ts":"2025-11-24T09:05:01.710896Z","caller":"traceutil/trace.go:171","msg":"trace[1006472868] transaction","detail":"{read_only:false; response_revision:18; number_of_response:1; }","duration":"172.691781ms","start":"2025-11-24T09:05:01.538162Z","end":"2025-11-24T09:05:01.710854Z","steps":["trace[1006472868] 'process raft request'  (duration: 109.125575ms)","trace[1006472868] 'compare'  (duration: 63.355357ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:05:01.710915Z","caller":"traceutil/trace.go:171","msg":"trace[981263403] transaction","detail":"{read_only:false; response_revision:21; number_of_response:1; }","duration":"170.391166ms","start":"2025-11-24T09:05:01.540518Z","end":"2025-11-24T09:05:01.710909Z","steps":["trace[981263403] 'process raft request'  (duration: 170.307811ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:01.711086Z","caller":"traceutil/trace.go:171","msg":"trace[1918024405] range","detail":"{range_begin:/registry/certificatesigningrequests/csr-9x9d8; range_end:; response_count:1; response_revision:22; }","duration":"172.654948ms","start":"2025-11-24T09:05:01.538422Z","end":"2025-11-24T09:05:01.711077Z","steps":["trace[1918024405] 'agreement among raft nodes before linearized reading'  (duration: 172.512588ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:01.710914Z","caller":"traceutil/trace.go:171","msg":"trace[1488131719] linearizableReadLoop","detail":"{readStateIndex:22; appliedIndex:18; }","duration":"172.460174ms","start":"2025-11-24T09:05:01.53844Z","end":"2025-11-24T09:05:01.7109Z","steps":["trace[1488131719] 'read index received'  (duration: 25.895675ms)","trace[1488131719] 'applied index is now lower than readState.Index'  (duration: 146.559971ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:05:01.711054Z","caller":"traceutil/trace.go:171","msg":"trace[1678514513] transaction","detail":"{read_only:false; response_revision:22; number_of_response:1; }","duration":"149.8797ms","start":"2025-11-24T09:05:01.561163Z","end":"2025-11-24T09:05:01.711042Z","steps":["trace[1678514513] 'process raft request'  (duration: 149.700045ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:01.711435Z","caller":"traceutil/trace.go:171","msg":"trace[2085549652] transaction","detail":"{read_only:false; response_revision:23; number_of_response:1; }","duration":"144.831606ms","start":"2025-11-24T09:05:01.566593Z","end":"2025-11-24T09:05:01.711425Z","steps":["trace[2085549652] 'process raft request'  (duration: 144.652194ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:01.711454Z","caller":"traceutil/trace.go:171","msg":"trace[1776690454] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"143.564662ms","start":"2025-11-24T09:05:01.567876Z","end":"2025-11-24T09:05:01.71144Z","steps":["trace[1776690454] 'process raft request'  (duration: 143.429904ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:05:01.711724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.213558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-11-24T09:05:01.711757Z","caller":"traceutil/trace.go:171","msg":"trace[366826393] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:25; }","duration":"146.253881ms","start":"2025-11-24T09:05:01.565494Z","end":"2025-11-24T09:05:01.711748Z","steps":["trace[366826393] 'agreement among raft nodes before linearized reading'  (duration: 146.18478ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:01.711931Z","caller":"traceutil/trace.go:171","msg":"trace[1923893862] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"137.068438ms","start":"2025-11-24T09:05:01.574851Z","end":"2025-11-24T09:05:01.711919Z","steps":["trace[1923893862] 'process raft request'  (duration: 136.481982ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:05:01.712125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.955875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T09:05:01.712163Z","caller":"traceutil/trace.go:171","msg":"trace[90940555] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:0; response_revision:25; }","duration":"172.012061ms","start":"2025-11-24T09:05:01.54014Z","end":"2025-11-24T09:05:01.712153Z","steps":["trace[90940555] 'agreement among raft nodes before linearized reading'  (duration: 171.930715ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T09:05:01.714609Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.250502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-128377\" ","response":"range_response_count:1 size:3558"}
	{"level":"info","ts":"2025-11-24T09:05:01.714708Z","caller":"traceutil/trace.go:171","msg":"trace[322045522] range","detail":"{range_begin:/registry/minions/old-k8s-version-128377; range_end:; response_count:1; response_revision:25; }","duration":"175.353553ms","start":"2025-11-24T09:05:01.539338Z","end":"2025-11-24T09:05:01.714691Z","steps":["trace[322045522] 'agreement among raft nodes before linearized reading'  (duration: 172.031487ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:03.559324Z","caller":"traceutil/trace.go:171","msg":"trace[627044044] transaction","detail":"{read_only:false; response_revision:204; number_of_response:1; }","duration":"100.594994ms","start":"2025-11-24T09:05:03.458371Z","end":"2025-11-24T09:05:03.558966Z","steps":["trace[627044044] 'process raft request'  (duration: 98.72439ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:05:11.43815Z","caller":"traceutil/trace.go:171","msg":"trace[324713988] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"136.243687ms","start":"2025-11-24T09:05:11.301878Z","end":"2025-11-24T09:05:11.438122Z","steps":["trace[324713988] 'process raft request'  (duration: 135.577137ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:05:46 up  3:48,  0 user,  load average: 4.43, 3.43, 10.79
	Linux old-k8s-version-128377 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [818537e08c0605796949e72c73a034b7d5f104ce598d4a12f0ed8bf30de9c646] <==
	I1124 09:05:21.342277       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:05:21.342547       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 09:05:21.342705       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:05:21.342728       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:05:21.342756       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:05:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:05:21.545109       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:05:21.545137       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:05:21.545150       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:05:21.545827       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:05:22.046295       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:05:22.046329       1 metrics.go:72] Registering metrics
	I1124 09:05:22.046391       1 controller.go:711] "Syncing nftables rules"
	I1124 09:05:31.547663       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 09:05:31.547728       1 main.go:301] handling current node
	I1124 09:05:41.547315       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 09:05:41.547363       1 main.go:301] handling current node
	
	
	==> kube-apiserver [842bd9db2d84b65b054e4b006bfb9c11b98ac3cdcbe13cd821183480cd046d8a] <==
	I1124 09:05:01.506809       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 09:05:01.506838       1 aggregator.go:166] initial CRD sync complete...
	I1124 09:05:01.506846       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 09:05:01.506863       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 09:05:01.506869       1 cache.go:39] Caches are synced for autoregister controller
	I1124 09:05:01.508109       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 09:05:01.508757       1 shared_informer.go:318] Caches are synced for configmaps
	E1124 09:05:01.537227       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1124 09:05:01.741694       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:05:02.411561       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:05:02.415133       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:05:02.415155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:05:02.826831       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:05:02.865354       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:05:02.945781       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:05:02.951178       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1124 09:05:02.952085       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 09:05:02.955858       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:05:03.457945       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 09:05:04.197911       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 09:05:04.208245       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:05:04.218442       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 09:05:17.015236       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 09:05:17.165046       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1124 09:05:17.165047       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5d9ec22e03b8b0446d34a5b300037519eb0aa0be6b1e6c451907abb271f71839] <==
	I1124 09:05:16.510194       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-128377"
	I1124 09:05:16.510252       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1124 09:05:16.516579       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 09:05:16.831807       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 09:05:16.890844       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 09:05:16.890883       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 09:05:17.019027       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1124 09:05:17.175390       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gbp66"
	I1124 09:05:17.176958       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fpbs2"
	I1124 09:05:17.325895       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vxxnm"
	I1124 09:05:17.332721       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-x5sl2"
	I1124 09:05:17.343264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="324.364712ms"
	I1124 09:05:17.351654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.320995ms"
	I1124 09:05:17.351793       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.483µs"
	I1124 09:05:17.672071       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 09:05:17.682409       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-x5sl2"
	I1124 09:05:17.690482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.456609ms"
	I1124 09:05:17.698725       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.176655ms"
	I1124 09:05:17.698851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.584µs"
	I1124 09:05:31.598337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.212µs"
	I1124 09:05:31.631586       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.266µs"
	I1124 09:05:32.360508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="141.431µs"
	I1124 09:05:32.386954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.987919ms"
	I1124 09:05:32.387048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.305µs"
	I1124 09:05:36.514110       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [370631aaaf577fb6a343282108f71bb03e72ef6024de9d9f8e2a2eeb7e16e746] <==
	I1124 09:05:17.831726       1 server_others.go:69] "Using iptables proxy"
	I1124 09:05:17.841216       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1124 09:05:17.866087       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:05:17.868989       1 server_others.go:152] "Using iptables Proxier"
	I1124 09:05:17.869038       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 09:05:17.869048       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 09:05:17.869091       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 09:05:17.869396       1 server.go:846] "Version info" version="v1.28.0"
	I1124 09:05:17.869419       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:05:17.870089       1 config.go:188] "Starting service config controller"
	I1124 09:05:17.870115       1 config.go:315] "Starting node config controller"
	I1124 09:05:17.870130       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 09:05:17.870125       1 config.go:97] "Starting endpoint slice config controller"
	I1124 09:05:17.870157       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 09:05:17.870135       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 09:05:17.970983       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1124 09:05:17.970991       1 shared_informer.go:318] Caches are synced for service config
	I1124 09:05:17.970967       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f5eddecfb179fe94de6b3892600fc1870efa5679c82874d72a3b301753e6f7d4] <==
	E1124 09:05:01.478877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 09:05:01.478878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1124 09:05:01.478887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 09:05:01.478907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 09:05:01.478997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 09:05:01.479055       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1124 09:05:01.479077       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 09:05:01.479125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1124 09:05:02.313819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 09:05:02.313863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1124 09:05:02.319417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 09:05:02.319451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1124 09:05:02.429310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1124 09:05:02.429356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1124 09:05:02.538603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 09:05:02.538660       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1124 09:05:02.549098       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 09:05:02.549140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 09:05:02.661900       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1124 09:05:02.661937       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1124 09:05:02.666268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 09:05:02.666312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 09:05:02.688142       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 09:05:02.688189       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1124 09:05:03.073951       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 09:05:16 old-k8s-version-128377 kubelet[1521]: I1124 09:05:16.342896    1521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.183175    1521 topology_manager.go:215] "Topology Admit Handler" podUID="52128126-550d-4795-9fa1-e1d3d9510dd3" podNamespace="kube-system" podName="kube-proxy-fpbs2"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.188113    1521 topology_manager.go:215] "Topology Admit Handler" podUID="49954742-ea7f-466f-80d8-7d6ac88ce36c" podNamespace="kube-system" podName="kindnet-gbp66"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338200    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzbjt\" (UniqueName: \"kubernetes.io/projected/52128126-550d-4795-9fa1-e1d3d9510dd3-kube-api-access-vzbjt\") pod \"kube-proxy-fpbs2\" (UID: \"52128126-550d-4795-9fa1-e1d3d9510dd3\") " pod="kube-system/kube-proxy-fpbs2"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338280    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/49954742-ea7f-466f-80d8-7d6ac88ce36c-cni-cfg\") pod \"kindnet-gbp66\" (UID: \"49954742-ea7f-466f-80d8-7d6ac88ce36c\") " pod="kube-system/kindnet-gbp66"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338319    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52128126-550d-4795-9fa1-e1d3d9510dd3-lib-modules\") pod \"kube-proxy-fpbs2\" (UID: \"52128126-550d-4795-9fa1-e1d3d9510dd3\") " pod="kube-system/kube-proxy-fpbs2"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338351    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49954742-ea7f-466f-80d8-7d6ac88ce36c-lib-modules\") pod \"kindnet-gbp66\" (UID: \"49954742-ea7f-466f-80d8-7d6ac88ce36c\") " pod="kube-system/kindnet-gbp66"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338392    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/52128126-550d-4795-9fa1-e1d3d9510dd3-kube-proxy\") pod \"kube-proxy-fpbs2\" (UID: \"52128126-550d-4795-9fa1-e1d3d9510dd3\") " pod="kube-system/kube-proxy-fpbs2"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338424    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49954742-ea7f-466f-80d8-7d6ac88ce36c-xtables-lock\") pod \"kindnet-gbp66\" (UID: \"49954742-ea7f-466f-80d8-7d6ac88ce36c\") " pod="kube-system/kindnet-gbp66"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338473    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5l7\" (UniqueName: \"kubernetes.io/projected/49954742-ea7f-466f-80d8-7d6ac88ce36c-kube-api-access-cd5l7\") pod \"kindnet-gbp66\" (UID: \"49954742-ea7f-466f-80d8-7d6ac88ce36c\") " pod="kube-system/kindnet-gbp66"
	Nov 24 09:05:17 old-k8s-version-128377 kubelet[1521]: I1124 09:05:17.338537    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52128126-550d-4795-9fa1-e1d3d9510dd3-xtables-lock\") pod \"kube-proxy-fpbs2\" (UID: \"52128126-550d-4795-9fa1-e1d3d9510dd3\") " pod="kube-system/kube-proxy-fpbs2"
	Nov 24 09:05:18 old-k8s-version-128377 kubelet[1521]: I1124 09:05:18.914069    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fpbs2" podStartSLOduration=1.913988204 podCreationTimestamp="2025-11-24 09:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:18.331224336 +0000 UTC m=+14.156867889" watchObservedRunningTime="2025-11-24 09:05:18.913988204 +0000 UTC m=+14.739631764"
	Nov 24 09:05:21 old-k8s-version-128377 kubelet[1521]: I1124 09:05:21.337175    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-gbp66" podStartSLOduration=1.258069975 podCreationTimestamp="2025-11-24 09:05:17 +0000 UTC" firstStartedPulling="2025-11-24 09:05:17.956037798 +0000 UTC m=+13.781681343" lastFinishedPulling="2025-11-24 09:05:21.035088666 +0000 UTC m=+16.860732211" observedRunningTime="2025-11-24 09:05:21.33698865 +0000 UTC m=+17.162632223" watchObservedRunningTime="2025-11-24 09:05:21.337120843 +0000 UTC m=+17.162764404"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.576686    1521 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.597206    1521 topology_manager.go:215] "Topology Admit Handler" podUID="7e4f56c0-0b49-47cd-9278-129ad898b781" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.598949    1521 topology_manager.go:215] "Topology Admit Handler" podUID="b84bae0f-9f75-4d1c-b2ed-da0c10a141cf" podNamespace="kube-system" podName="coredns-5dd5756b68-vxxnm"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.745876    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7e4f56c0-0b49-47cd-9278-129ad898b781-tmp\") pod \"storage-provisioner\" (UID: \"7e4f56c0-0b49-47cd-9278-129ad898b781\") " pod="kube-system/storage-provisioner"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.746005    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b84bae0f-9f75-4d1c-b2ed-da0c10a141cf-config-volume\") pod \"coredns-5dd5756b68-vxxnm\" (UID: \"b84bae0f-9f75-4d1c-b2ed-da0c10a141cf\") " pod="kube-system/coredns-5dd5756b68-vxxnm"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.746049    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s87ck\" (UniqueName: \"kubernetes.io/projected/b84bae0f-9f75-4d1c-b2ed-da0c10a141cf-kube-api-access-s87ck\") pod \"coredns-5dd5756b68-vxxnm\" (UID: \"b84bae0f-9f75-4d1c-b2ed-da0c10a141cf\") " pod="kube-system/coredns-5dd5756b68-vxxnm"
	Nov 24 09:05:31 old-k8s-version-128377 kubelet[1521]: I1124 09:05:31.746075    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp79g\" (UniqueName: \"kubernetes.io/projected/7e4f56c0-0b49-47cd-9278-129ad898b781-kube-api-access-mp79g\") pod \"storage-provisioner\" (UID: \"7e4f56c0-0b49-47cd-9278-129ad898b781\") " pod="kube-system/storage-provisioner"
	Nov 24 09:05:32 old-k8s-version-128377 kubelet[1521]: I1124 09:05:32.360059    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vxxnm" podStartSLOduration=15.360007602 podCreationTimestamp="2025-11-24 09:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:32.35995945 +0000 UTC m=+28.185603012" watchObservedRunningTime="2025-11-24 09:05:32.360007602 +0000 UTC m=+28.185651165"
	Nov 24 09:05:32 old-k8s-version-128377 kubelet[1521]: I1124 09:05:32.379733    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.379681272 podCreationTimestamp="2025-11-24 09:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:32.370112867 +0000 UTC m=+28.195756426" watchObservedRunningTime="2025-11-24 09:05:32.379681272 +0000 UTC m=+28.205324835"
	Nov 24 09:05:34 old-k8s-version-128377 kubelet[1521]: I1124 09:05:34.439352    1521 topology_manager.go:215] "Topology Admit Handler" podUID="bfaec734-d874-4dcb-b31f-feb87adccfca" podNamespace="default" podName="busybox"
	Nov 24 09:05:34 old-k8s-version-128377 kubelet[1521]: I1124 09:05:34.561236    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwqg6\" (UniqueName: \"kubernetes.io/projected/bfaec734-d874-4dcb-b31f-feb87adccfca-kube-api-access-qwqg6\") pod \"busybox\" (UID: \"bfaec734-d874-4dcb-b31f-feb87adccfca\") " pod="default/busybox"
	Nov 24 09:05:38 old-k8s-version-128377 kubelet[1521]: I1124 09:05:38.375611    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.866491732 podCreationTimestamp="2025-11-24 09:05:34 +0000 UTC" firstStartedPulling="2025-11-24 09:05:34.853152472 +0000 UTC m=+30.678796027" lastFinishedPulling="2025-11-24 09:05:37.362217947 +0000 UTC m=+33.187861503" observedRunningTime="2025-11-24 09:05:38.375372923 +0000 UTC m=+34.201016485" watchObservedRunningTime="2025-11-24 09:05:38.375557208 +0000 UTC m=+34.201200770"
	
	
	==> storage-provisioner [a9a5857553e67019e47641c1970bb0d5555afd6b608c94a94501dd485efac0c4] <==
	I1124 09:05:32.081185       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:05:32.090604       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:05:32.090641       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 09:05:32.097885       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:05:32.097963       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"742d8911-ea16-4251-8cf0-6f909959732d", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-128377_807761f2-87be-4f83-a3e6-a9218ea13b30 became leader
	I1124 09:05:32.098144       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-128377_807761f2-87be-4f83-a3e6-a9218ea13b30!
	I1124 09:05:32.198942       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-128377_807761f2-87be-4f83-a3e6-a9218ea13b30!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-128377 -n old-k8s-version-128377
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-128377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (13.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (13.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-820576 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ed19b18b-e761-4aff-8676-38be0169fca8] Pending
helpers_test.go:352: "busybox" [ed19b18b-e761-4aff-8676-38be0169fca8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ed19b18b-e761-4aff-8676-38be0169fca8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003427853s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-820576 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
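The failed check above compares the soft open-file limit reported inside the busybox pod (1024) with the 1048576 value the test expects. A minimal way to repeat the comparison by hand against this profile, assuming no-preload-820576 is still running and that containerd runs as a systemd unit inside the kicbase node container (illustrative diagnostic commands, not part of the test suite):

    # soft and hard open-file limits as reported inside the pod
    kubectl --context no-preload-820576 exec busybox -- /bin/sh -c "ulimit -Sn; ulimit -Hn"
    # file-descriptor limit systemd grants to containerd inside the node container
    docker exec no-preload-820576 systemctl show containerd --property=LimitNOFILE

If containerd already reports a large LimitNOFILE while the pod still sees 1024, the clamp is probably applied by a lower layer, for example per-container rlimit defaults in the runtime rather than the host configuration.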
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-820576
helpers_test.go:243: (dbg) docker inspect no-preload-820576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2",
	        "Created": "2025-11-24T09:04:50.428873291Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 696697,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:04:50.865515581Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2/hosts",
	        "LogPath": "/var/lib/docker/containers/fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2/fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2-json.log",
	        "Name": "/no-preload-820576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-820576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-820576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2",
	                "LowerDir": "/var/lib/docker/overlay2/cef831c44676981960379b41c7a7ce597355fd430968301d8adaa7f1c89ecabf-init/diff:/var/lib/docker/overlay2/a062700147ad5d1f8f2a68f70ba6ad34ea6495dd365bcb265ab17ea27961837b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cef831c44676981960379b41c7a7ce597355fd430968301d8adaa7f1c89ecabf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cef831c44676981960379b41c7a7ce597355fd430968301d8adaa7f1c89ecabf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cef831c44676981960379b41c7a7ce597355fd430968301d8adaa7f1c89ecabf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-820576",
	                "Source": "/var/lib/docker/volumes/no-preload-820576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-820576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-820576",
	                "name.minikube.sigs.k8s.io": "no-preload-820576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d00e2266ea6274ea021af231036b967845b3499983d5775fb4cea7d5b1677a4e",
	            "SandboxKey": "/var/run/docker/netns/d00e2266ea62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-820576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7957ce7dc9aefa9cad531fe591f93551c8388eaf00488d017c6e11e46821fce7",
	                    "EndpointID": "da19cc42121dc67bd6d32b5462f319359aedb02efd9ff5344a89232e1394cff6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "52:15:8b:bd:8c:81",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-820576",
	                        "fbfc76af5db1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
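In the HostConfig block above, "Ulimits" is an empty list, so the node container sets no explicit nofile limit of its own and inherits whatever default the Docker daemon applies. A rough way to check where a 1024 default might come from, assuming shell access to the Docker host (illustrative commands only, not captured by this post-mortem):

    # default open-file limit a fresh container receives from this daemon
    docker run --rm busybox sh -c 'ulimit -n'
    # limit systemd grants to the Docker daemon itself
    systemctl show docker --property=LimitNOFILE
    # any daemon-wide override, if one is configured
    grep -s default-ulimits /etc/docker/daemon.json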
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820576 -n no-preload-820576
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-820576 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-820576 logs -n 25: (1.173827112s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-203355 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                              │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                              │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ delete  │ -p missing-upgrade-058813                                                                                                                                                                                                                           │ missing-upgrade-058813 │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:04 UTC │
	│ ssh     │ -p cilium-203355 sudo systemctl cat docker --no-pager                                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo docker system info                                                                                                                                                                                                            │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo containerd config dump                                                                                                                                                                                                        │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo crio config                                                                                                                                                                                                                   │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ delete  │ -p cilium-203355                                                                                                                                                                                                                                    │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:04 UTC │
	│ start   │ -p old-k8s-version-128377 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:05 UTC │
	│ start   │ -p no-preload-820576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:05 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:04:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:04:47.686335  696018 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:04:47.686445  696018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:04:47.686456  696018 out.go:374] Setting ErrFile to fd 2...
	I1124 09:04:47.686474  696018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:04:47.686683  696018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 09:04:47.687133  696018 out.go:368] Setting JSON to false
	I1124 09:04:47.688408  696018 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13624,"bootTime":1763961464,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:04:47.688532  696018 start.go:143] virtualization: kvm guest
	I1124 09:04:47.690354  696018 out.go:179] * [no-preload-820576] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:04:47.691472  696018 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:04:47.691501  696018 notify.go:221] Checking for updates...
	I1124 09:04:47.693590  696018 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:04:47.694681  696018 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:04:47.695683  696018 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 09:04:47.697109  696018 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:04:47.698248  696018 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:04:47.699807  696018 config.go:182] Loaded profile config "cert-expiration-869306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 09:04:47.699947  696018 config.go:182] Loaded profile config "kubernetes-upgrade-521313": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:04:47.700091  696018 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:04:47.700236  696018 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:04:47.724639  696018 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:04:47.724770  696018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:04:47.791833  696018 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-24 09:04:47.780432821 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:04:47.791998  696018 docker.go:319] overlay module found
	I1124 09:04:47.794089  696018 out.go:179] * Using the docker driver based on user configuration
	I1124 09:04:47.795621  696018 start.go:309] selected driver: docker
	I1124 09:04:47.795639  696018 start.go:927] validating driver "docker" against <nil>
	I1124 09:04:47.795651  696018 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:04:47.796325  696018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:04:47.859511  696018 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 09:04:47.848833175 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:04:47.859748  696018 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:04:47.859957  696018 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:04:47.861778  696018 out.go:179] * Using Docker driver with root privileges
	I1124 09:04:47.862632  696018 cni.go:84] Creating CNI manager for ""
	I1124 09:04:47.862696  696018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:04:47.862708  696018 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:04:47.862775  696018 start.go:353] cluster config:
	{Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:04:47.863875  696018 out.go:179] * Starting "no-preload-820576" primary control-plane node in "no-preload-820576" cluster
	I1124 09:04:47.864812  696018 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 09:04:47.865865  696018 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:04:47.866835  696018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:04:47.866921  696018 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:04:47.866958  696018 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json ...
	I1124 09:04:47.867001  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json: {Name:mk04f43d651118a00ac1be32029cffb149669d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:47.867208  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:47.890231  696018 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:04:47.890260  696018 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:04:47.890281  696018 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:04:47.890321  696018 start.go:360] acquireMachinesLock for no-preload-820576: {Name:mk6b6fb581999217c645edacaa9c18971e97964f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:47.890432  696018 start.go:364] duration metric: took 88.402µs to acquireMachinesLock for "no-preload-820576"
	I1124 09:04:47.890474  696018 start.go:93] Provisioning new machine with config: &{Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:04:47.890567  696018 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:04:48.739369  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:40906->192.168.76.2:8443: read: connection reset by peer
	I1124 09:04:48.739430  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:48.740184  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:48.920539  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:48.921019  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:49.420530  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:49.420996  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:46.813535  695520 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:04:46.813778  695520 start.go:159] libmachine.API.Create for "old-k8s-version-128377" (driver="docker")
	I1124 09:04:46.813816  695520 client.go:173] LocalClient.Create starting
	I1124 09:04:46.813892  695520 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem
	I1124 09:04:46.813936  695520 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:46.813967  695520 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:46.814043  695520 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem
	I1124 09:04:46.814076  695520 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:46.814095  695520 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:46.814441  695520 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:04:46.831913  695520 cli_runner.go:211] docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:04:46.831996  695520 network_create.go:284] running [docker network inspect old-k8s-version-128377] to gather additional debugging logs...
	I1124 09:04:46.832018  695520 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377
	W1124 09:04:46.848875  695520 cli_runner.go:211] docker network inspect old-k8s-version-128377 returned with exit code 1
	I1124 09:04:46.848912  695520 network_create.go:287] error running [docker network inspect old-k8s-version-128377]: docker network inspect old-k8s-version-128377: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-128377 not found
	I1124 09:04:46.848928  695520 network_create.go:289] output of [docker network inspect old-k8s-version-128377]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-128377 not found
	
	** /stderr **
	I1124 09:04:46.849044  695520 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:46.866840  695520 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c654f70fdf0e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:f7:ca:91:9d:ad} reservation:<nil>}
	I1124 09:04:46.867443  695520 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1081c4000c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:b1:6d:32:2c:78} reservation:<nil>}
	I1124 09:04:46.868124  695520 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30fdd1988974 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:59:2f:0a:61:81} reservation:<nil>}
	I1124 09:04:46.868877  695520 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cd297979890 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:91:f3:e4:95:17} reservation:<nil>}
	I1124 09:04:46.869272  695520 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9bf62793deff IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0a:d1:a9:3b:89:29} reservation:<nil>}
	I1124 09:04:46.869983  695520 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5fa0f78c53ad IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:9e:96:d6:0a:fe:a6} reservation:<nil>}
	I1124 09:04:46.870809  695520 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e158e0}
	I1124 09:04:46.870832  695520 network_create.go:124] attempt to create docker network old-k8s-version-128377 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 09:04:46.870880  695520 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-128377 old-k8s-version-128377
	I1124 09:04:46.993201  695520 network_create.go:108] docker network old-k8s-version-128377 192.168.103.0/24 created
	I1124 09:04:46.993243  695520 kic.go:121] calculated static IP "192.168.103.2" for the "old-k8s-version-128377" container
	I1124 09:04:46.993321  695520 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:04:47.015308  695520 cli_runner.go:164] Run: docker volume create old-k8s-version-128377 --label name.minikube.sigs.k8s.io=old-k8s-version-128377 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:04:47.034791  695520 oci.go:103] Successfully created a docker volume old-k8s-version-128377
	I1124 09:04:47.034869  695520 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-128377-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-128377 --entrypoint /usr/bin/test -v old-k8s-version-128377:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:04:47.772927  695520 oci.go:107] Successfully prepared a docker volume old-k8s-version-128377
	I1124 09:04:47.773023  695520 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 09:04:47.773041  695520 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 09:04:47.773133  695520 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-128377:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 09:04:50.987600  695520 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-128377:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.214396647s)
	I1124 09:04:50.987639  695520 kic.go:203] duration metric: took 3.214593361s to extract preloaded images to volume ...
	W1124 09:04:50.987789  695520 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:04:50.987849  695520 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:04:50.987920  695520 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:04:51.061728  695520 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-128377 --name old-k8s-version-128377 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-128377 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-128377 --network old-k8s-version-128377 --ip 192.168.103.2 --volume old-k8s-version-128377:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:04:51.401514  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Running}}
	I1124 09:04:51.426748  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:04:51.456228  695520 cli_runner.go:164] Run: docker exec old-k8s-version-128377 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:04:51.515517  695520 oci.go:144] the created container "old-k8s-version-128377" has a running status.
	I1124 09:04:51.515571  695520 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa...
	I1124 09:04:47.893309  696018 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:04:47.893645  696018 start.go:159] libmachine.API.Create for "no-preload-820576" (driver="docker")
	I1124 09:04:47.893687  696018 client.go:173] LocalClient.Create starting
	I1124 09:04:47.893789  696018 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem
	I1124 09:04:47.893833  696018 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:47.893861  696018 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:47.893953  696018 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem
	I1124 09:04:47.893982  696018 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:47.893999  696018 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:47.894436  696018 cli_runner.go:164] Run: docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:04:47.915789  696018 cli_runner.go:211] docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:04:47.915886  696018 network_create.go:284] running [docker network inspect no-preload-820576] to gather additional debugging logs...
	I1124 09:04:47.915925  696018 cli_runner.go:164] Run: docker network inspect no-preload-820576
	W1124 09:04:47.939725  696018 cli_runner.go:211] docker network inspect no-preload-820576 returned with exit code 1
	I1124 09:04:47.939760  696018 network_create.go:287] error running [docker network inspect no-preload-820576]: docker network inspect no-preload-820576: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-820576 not found
	I1124 09:04:47.939788  696018 network_create.go:289] output of [docker network inspect no-preload-820576]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-820576 not found
	
	** /stderr **
	I1124 09:04:47.939956  696018 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:47.960368  696018 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c654f70fdf0e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:f7:ca:91:9d:ad} reservation:<nil>}
	I1124 09:04:47.961456  696018 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1081c4000c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:b1:6d:32:2c:78} reservation:<nil>}
	I1124 09:04:47.962397  696018 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30fdd1988974 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:59:2f:0a:61:81} reservation:<nil>}
	I1124 09:04:47.963597  696018 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cd297979890 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:91:f3:e4:95:17} reservation:<nil>}
	I1124 09:04:47.964832  696018 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e9cf50}
	I1124 09:04:47.964868  696018 network_create.go:124] attempt to create docker network no-preload-820576 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 09:04:47.964929  696018 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-820576 no-preload-820576
	I1124 09:04:48.017684  696018 network_create.go:108] docker network no-preload-820576 192.168.85.0/24 created
	I1124 09:04:48.017725  696018 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-820576" container
	I1124 09:04:48.017804  696018 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:04:48.037793  696018 cli_runner.go:164] Run: docker volume create no-preload-820576 --label name.minikube.sigs.k8s.io=no-preload-820576 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:04:48.057638  696018 oci.go:103] Successfully created a docker volume no-preload-820576
	I1124 09:04:48.057738  696018 cli_runner.go:164] Run: docker run --rm --name no-preload-820576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820576 --entrypoint /usr/bin/test -v no-preload-820576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:04:48.192090  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:48.509962  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:48.827547  696018 cache.go:107] acquiring lock: {Name:mkbcabeb5a23ff077ffdad64c71e9fe699d94040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827544  696018 cache.go:107] acquiring lock: {Name:mk92c82896924ab47423467b25ccd98ee4128baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827656  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:04:48.827672  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:04:48.827672  696018 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 138.757µs
	I1124 09:04:48.827689  696018 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:04:48.827683  696018 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 176.678µs
	I1124 09:04:48.827708  696018 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:04:48.827708  696018 cache.go:107] acquiring lock: {Name:mkf3a006b133f81ed32779d427a8d0a9b25f9000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827735  696018 cache.go:107] acquiring lock: {Name:mkd74819cb24442927f7fb2cffd47478de40e14c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827766  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:04:48.827773  696018 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 69.196µs
	I1124 09:04:48.827780  696018 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:04:48.827788  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:04:48.827796  696018 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 65.204µs
	I1124 09:04:48.827804  696018 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:04:48.827791  696018 cache.go:107] acquiring lock: {Name:mk6b573bbd33cfc3c3f77668030fb064598572fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827820  696018 cache.go:107] acquiring lock: {Name:mk7f052905284f586f4f1cf24b8c34cc48e0b85b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827866  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:04:48.827873  696018 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 57.027µs
	I1124 09:04:48.827882  696018 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:04:48.827796  696018 cache.go:107] acquiring lock: {Name:mk1d635b72f6d026600360916178f900a450350e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827887  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:04:48.827900  696018 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 115.907µs
	I1124 09:04:48.827910  696018 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:04:48.827914  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:04:48.827921  696018 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 128.45µs
	I1124 09:04:48.827937  696018 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:04:48.827719  696018 cache.go:107] acquiring lock: {Name:mk8023690ce5b18d9a1789b2f878bf92c1381799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.828021  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:04:48.828033  696018 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 327.502µs
	I1124 09:04:48.828051  696018 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:04:48.828067  696018 cache.go:87] Successfully saved all images to host disk.
	I1124 09:04:50.353018  696018 cli_runner.go:217] Completed: docker run --rm --name no-preload-820576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820576 --entrypoint /usr/bin/test -v no-preload-820576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.295229864s)
	I1124 09:04:50.353061  696018 oci.go:107] Successfully prepared a docker volume no-preload-820576
	I1124 09:04:50.353130  696018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1124 09:04:50.353205  696018 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:04:50.353233  696018 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:04:50.353275  696018 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:04:50.412447  696018 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-820576 --name no-preload-820576 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820576 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-820576 --network no-preload-820576 --ip 192.168.85.2 --volume no-preload-820576:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:04:51.174340  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Running}}
	I1124 09:04:51.195074  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:04:51.216706  696018 cli_runner.go:164] Run: docker exec no-preload-820576 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:04:51.270513  696018 oci.go:144] the created container "no-preload-820576" has a running status.
	I1124 09:04:51.270555  696018 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa...
	I1124 09:04:51.639069  696018 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:04:51.669871  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:04:51.693409  696018 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:04:51.693441  696018 kic_runner.go:114] Args: [docker exec --privileged no-preload-820576 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:04:51.754414  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:04:51.781590  696018 machine.go:94] provisionDockerMachine start ...
	I1124 09:04:51.781685  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:51.808597  696018 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:51.809054  696018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 09:04:51.809092  696018 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:04:51.963230  696018 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-820576
	
	I1124 09:04:51.963276  696018 ubuntu.go:182] provisioning hostname "no-preload-820576"
	I1124 09:04:51.963339  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:51.984069  696018 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:51.984406  696018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 09:04:51.984432  696018 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-820576 && echo "no-preload-820576" | sudo tee /etc/hostname
	I1124 09:04:52.142431  696018 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-820576
	
	I1124 09:04:52.142545  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.163141  696018 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:52.163483  696018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 09:04:52.163520  696018 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820576/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:04:52.313074  696018 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:04:52.313103  696018 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:04:52.313151  696018 ubuntu.go:190] setting up certificates
	I1124 09:04:52.313174  696018 provision.go:84] configureAuth start
	I1124 09:04:52.313241  696018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:04:52.333178  696018 provision.go:143] copyHostCerts
	I1124 09:04:52.333250  696018 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:04:52.333267  696018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:04:52.333340  696018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:04:52.333454  696018 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:04:52.333479  696018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:04:52.333527  696018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:04:52.333610  696018 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:04:52.333631  696018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:04:52.333670  696018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:04:52.333745  696018 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.no-preload-820576 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-820576]
	I1124 09:04:52.372869  696018 provision.go:177] copyRemoteCerts
	I1124 09:04:52.372936  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:04:52.372984  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.391516  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.495715  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:04:52.515508  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 09:04:52.533110  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:04:52.549620  696018 provision.go:87] duration metric: took 236.431147ms to configureAuth
	I1124 09:04:52.549643  696018 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:04:52.549785  696018 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:04:52.549795  696018 machine.go:97] duration metric: took 768.185276ms to provisionDockerMachine
	I1124 09:04:52.549801  696018 client.go:176] duration metric: took 4.656107804s to LocalClient.Create
	I1124 09:04:52.549817  696018 start.go:167] duration metric: took 4.656176839s to libmachine.API.Create "no-preload-820576"
	I1124 09:04:52.549827  696018 start.go:293] postStartSetup for "no-preload-820576" (driver="docker")
	I1124 09:04:52.549837  696018 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:04:52.549917  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:04:52.549957  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.567598  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.670209  696018 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:04:52.673794  696018 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:04:52.673819  696018 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:04:52.673829  696018 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:04:52.673873  696018 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:04:52.673954  696018 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:04:52.674055  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:04:52.681571  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:04:51.668051  695520 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:04:51.701732  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:04:51.724111  695520 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:04:51.724139  695520 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-128377 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:04:51.779671  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:04:51.808240  695520 machine.go:94] provisionDockerMachine start ...
	I1124 09:04:51.808514  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:51.833533  695520 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:51.833868  695520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 09:04:51.833890  695520 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:04:51.988683  695520 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-128377
	
	I1124 09:04:51.988712  695520 ubuntu.go:182] provisioning hostname "old-k8s-version-128377"
	I1124 09:04:51.988769  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.008953  695520 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:52.009275  695520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 09:04:52.009299  695520 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-128377 && echo "old-k8s-version-128377" | sudo tee /etc/hostname
	I1124 09:04:52.164712  695520 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-128377
	
	I1124 09:04:52.164811  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.184388  695520 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:52.184674  695520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 09:04:52.184701  695520 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-128377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-128377/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-128377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:04:52.328284  695520 main.go:143] libmachine: SSH cmd err, output: <nil>: 
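(Editor's note) The provisioning block above sets the machine hostname and patches /etc/hosts by running shell snippets over SSH. As a rough illustration of how one such command can be issued over SSH, here is a minimal Go sketch; this is not minikube's sshutil/ssh_runner code, and the key path, port 33068, user and command are simply the values taken from this log.

// Minimal sketch, not minikube's implementation: run one provisioning command over SSH.
// Key path, port and user are the values from the log lines above; error handling is kept short.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33068", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname old-k8s-version-128377 && echo "old-k8s-version-128377" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s, err: %v\n", out, err)
}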
	I1124 09:04:52.328315  695520 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:04:52.328349  695520 ubuntu.go:190] setting up certificates
	I1124 09:04:52.328371  695520 provision.go:84] configureAuth start
	I1124 09:04:52.328437  695520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-128377
	I1124 09:04:52.347382  695520 provision.go:143] copyHostCerts
	I1124 09:04:52.347441  695520 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:04:52.347449  695520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:04:52.347530  695520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:04:52.347615  695520 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:04:52.347624  695520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:04:52.347646  695520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:04:52.347699  695520 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:04:52.347707  695520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:04:52.347724  695520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:04:52.347767  695520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-128377 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-128377]
	I1124 09:04:52.449836  695520 provision.go:177] copyRemoteCerts
	I1124 09:04:52.449907  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:04:52.449955  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.467389  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.568756  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:04:52.590911  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 09:04:52.608291  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:04:52.625476  695520 provision.go:87] duration metric: took 297.076146ms to configureAuth
	I1124 09:04:52.625501  695520 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:04:52.625684  695520 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:04:52.625697  695520 machine.go:97] duration metric: took 817.329123ms to provisionDockerMachine
	I1124 09:04:52.625703  695520 client.go:176] duration metric: took 5.811878386s to LocalClient.Create
	I1124 09:04:52.625724  695520 start.go:167] duration metric: took 5.811947677s to libmachine.API.Create "old-k8s-version-128377"
	I1124 09:04:52.625737  695520 start.go:293] postStartSetup for "old-k8s-version-128377" (driver="docker")
	I1124 09:04:52.625751  695520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:04:52.625805  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:04:52.625861  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.643125  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.746507  695520 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:04:52.750419  695520 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:04:52.750446  695520 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:04:52.750471  695520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:04:52.750527  695520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:04:52.750621  695520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:04:52.750735  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:04:52.759275  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:04:52.779524  695520 start.go:296] duration metric: took 153.769147ms for postStartSetup
	I1124 09:04:52.779876  695520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-128377
	I1124 09:04:52.797331  695520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/config.json ...
	I1124 09:04:52.797607  695520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:04:52.797652  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.814633  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.914421  695520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:04:52.919231  695520 start.go:128] duration metric: took 6.107446039s to createHost
	I1124 09:04:52.919259  695520 start.go:83] releasing machines lock for "old-k8s-version-128377", held for 6.10762389s
	I1124 09:04:52.919326  695520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-128377
	I1124 09:04:52.937920  695520 ssh_runner.go:195] Run: cat /version.json
	I1124 09:04:52.937964  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.937993  695520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:04:52.938073  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.957005  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.957162  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:53.162492  695520 ssh_runner.go:195] Run: systemctl --version
	I1124 09:04:53.168749  695520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:04:53.173128  695520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:04:53.173198  695520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:04:53.196703  695520 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:04:53.196732  695520 start.go:496] detecting cgroup driver to use...
	I1124 09:04:53.196770  695520 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:04:53.196824  695520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:04:53.212821  695520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:04:53.226105  695520 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:04:53.226149  695520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:04:53.245323  695520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:04:53.261892  695520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:04:53.346225  695520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:04:53.440817  695520 docker.go:234] disabling docker service ...
	I1124 09:04:53.440886  695520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:04:53.466043  695520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:04:53.478621  695520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:04:53.566248  695520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:04:53.652228  695520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:04:53.665204  695520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:04:53.679300  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 09:04:53.689354  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:04:53.697996  695520 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:04:53.698043  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:04:53.706349  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.715138  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:04:53.724198  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.732594  695520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:04:53.740362  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:04:53.748766  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:04:53.757048  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:04:53.765265  695520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:04:53.772343  695520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:04:53.779254  695520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:04:53.856087  695520 ssh_runner.go:195] Run: sudo systemctl restart containerd
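(Editor's note) The sed invocations above rewrite /etc/containerd/config.toml (pause image, SystemdCgroup = true, runc v2 shim, conf_dir) before containerd is restarted. Below is a minimal local Go sketch of just the SystemdCgroup edit, assuming the same file path; it does in-process what the log does with sed over SSH, and is not how minikube itself applies the change.

// Local sketch of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// (?m) makes ^ and $ match per line, mirroring sed's line-oriented substitution;
	// the capture group preserves the original indentation.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}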
	I1124 09:04:53.959050  695520 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:04:53.959110  695520 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:04:53.963133  695520 start.go:564] Will wait 60s for crictl version
	I1124 09:04:53.963185  695520 ssh_runner.go:195] Run: which crictl
	I1124 09:04:53.966895  695520 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:04:53.994878  695520 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:04:53.994934  695520 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.021265  695520 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.045827  695520 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1124 09:04:52.701569  696018 start.go:296] duration metric: took 151.731915ms for postStartSetup
	I1124 09:04:52.701858  696018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:04:52.719203  696018 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json ...
	I1124 09:04:52.719424  696018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:04:52.719488  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.736084  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.835481  696018 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:04:52.840061  696018 start.go:128] duration metric: took 4.94947332s to createHost
	I1124 09:04:52.840083  696018 start.go:83] releasing machines lock for "no-preload-820576", held for 4.94964132s
	I1124 09:04:52.840148  696018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:04:52.858132  696018 ssh_runner.go:195] Run: cat /version.json
	I1124 09:04:52.858160  696018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:04:52.858222  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.858246  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.877130  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.877482  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.975607  696018 ssh_runner.go:195] Run: systemctl --version
	I1124 09:04:53.031452  696018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:04:53.036065  696018 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:04:53.036130  696018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:04:53.059999  696018 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:04:53.060024  696018 start.go:496] detecting cgroup driver to use...
	I1124 09:04:53.060062  696018 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:04:53.060105  696018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:04:53.074505  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:04:53.086089  696018 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:04:53.086143  696018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:04:53.101555  696018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:04:53.118093  696018 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:04:53.204201  696018 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:04:53.300933  696018 docker.go:234] disabling docker service ...
	I1124 09:04:53.301034  696018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:04:53.320036  696018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:04:53.331959  696018 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:04:53.420508  696018 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:04:53.513830  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:04:53.526253  696018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:04:53.540562  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:53.865082  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 09:04:53.876277  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:04:53.885584  696018 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:04:53.885655  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:04:53.895158  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.904766  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:04:53.913841  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.922747  696018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:04:53.932360  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:04:53.943272  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:04:53.952416  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:04:53.961850  696018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:04:53.969795  696018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:04:53.977270  696018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:04:54.067216  696018 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 09:04:54.151776  696018 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:04:54.151849  696018 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:04:54.156309  696018 start.go:564] Will wait 60s for crictl version
	I1124 09:04:54.156367  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:54.160683  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:04:54.187130  696018 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:04:54.187193  696018 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.208524  696018 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.233294  696018 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1124 09:04:49.920675  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:49.921171  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:50.420805  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:50.421212  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:50.920534  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:54.046841  695520 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:54.064168  695520 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 09:04:54.068915  695520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:04:54.079411  695520 kubeadm.go:884] updating cluster {Name:old-k8s-version-128377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:04:54.079584  695520 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 09:04:54.079651  695520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:04:54.105064  695520 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:04:54.105092  695520 containerd.go:534] Images already preloaded, skipping extraction
	I1124 09:04:54.105153  695520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:04:54.131723  695520 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:04:54.131746  695520 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:04:54.131756  695520 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 containerd true true} ...
	I1124 09:04:54.131858  695520 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-128377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:04:54.131921  695520 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:04:54.160918  695520 cni.go:84] Creating CNI manager for ""
	I1124 09:04:54.160940  695520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:04:54.160955  695520 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:04:54.160976  695520 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-128377 NodeName:old-k8s-version-128377 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:04:54.161123  695520 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-128377"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:04:54.161190  695520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 09:04:54.169102  695520 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:04:54.169150  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:04:54.176962  695520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1124 09:04:54.191252  695520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:04:54.206931  695520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I1124 09:04:54.220958  695520 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:04:54.225158  695520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
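(Editor's note) The bash one-liner above regenerates /etc/hosts by dropping any stale control-plane.minikube.internal entry and appending the current one. A small Go sketch of the same rewrite follows, using the IP and host name shown in this log; it is an illustration, not minikube's own implementation.

// Sketch of the grep -v + echo rewrite above, done in-process.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.103.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop any stale entry, like grep -v $'\tcontrol-plane.minikube.internal$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}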
	I1124 09:04:54.236116  695520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:04:54.319599  695520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:04:54.342135  695520 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377 for IP: 192.168.103.2
	I1124 09:04:54.342157  695520 certs.go:195] generating shared ca certs ...
	I1124 09:04:54.342176  695520 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.342355  695520 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:04:54.342406  695520 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:04:54.342416  695520 certs.go:257] generating profile certs ...
	I1124 09:04:54.342497  695520 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key
	I1124 09:04:54.342513  695520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt with IP's: []
	I1124 09:04:54.488402  695520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt ...
	I1124 09:04:54.488432  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt: {Name:mk87cd521056210340bc5798f0387b3f36dc4635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.488613  695520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key ...
	I1124 09:04:54.488628  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key: {Name:mk03c81f6da2f2b54dfd9fa0e30866e3372921ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.488712  695520 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1
	I1124 09:04:54.488729  695520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1124 09:04:54.543616  695520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1 ...
	I1124 09:04:54.543654  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1: {Name:mk2f5faeeb1a8cba2153625fbd7d3a7e54f95aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.543851  695520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1 ...
	I1124 09:04:54.543873  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1: {Name:mk7ed4cadcafdc2e1a661255372b702ae6719654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.543964  695520 certs.go:382] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt
	I1124 09:04:54.544040  695520 certs.go:386] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key
	I1124 09:04:54.544132  695520 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key
	I1124 09:04:54.544150  695520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt with IP's: []
	I1124 09:04:54.594781  695520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt ...
	I1124 09:04:54.594837  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt: {Name:mk33ff647329a0bdf714fd27ddf109ec15b6d483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.595015  695520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key ...
	I1124 09:04:54.595034  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key: {Name:mk9bf52d92c35c053f63b6073f2a38e1ff2182d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.595287  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:04:54.595344  695520 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:04:54.595359  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:04:54.595395  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:04:54.595433  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:04:54.595484  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:04:54.595553  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:04:54.596350  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:04:54.616384  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:04:54.633998  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:04:54.651552  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:04:54.669737  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 09:04:54.686876  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:04:54.703726  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:04:54.720840  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:04:54.737534  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:04:54.757717  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:04:54.774715  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:04:54.791052  695520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:04:54.802968  695520 ssh_runner.go:195] Run: openssl version
	I1124 09:04:54.808893  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:04:54.816748  695520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:04:54.820220  695520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:04:54.820260  695520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:04:54.854133  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:04:54.862216  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:04:54.870277  695520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:04:54.873860  695520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:04:54.873906  695520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:04:54.910146  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:04:54.919148  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:04:54.927753  695520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:04:54.931870  695520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:04:54.931921  695520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:04:54.972285  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
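(Editor's note) The ln -fs commands above link each CA into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem) so the system trust store can resolve it by hash. Below is a sketch that combines the two steps the log runs separately (openssl x509 -hash, then ln -fs); the cert path is the one shown above and root privileges on the target machine are assumed.

// Sketch: compute the OpenSSL subject hash of a CA and create the <hash>.0 symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const cert = "/etc/ssl/certs/minikubeCA.pem" // already linked from /usr/share/ca-certificates above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace an existing link if present
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}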
	I1124 09:04:54.981223  695520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:04:54.984999  695520 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:04:54.985067  695520 kubeadm.go:401] StartCluster: {Name:old-k8s-version-128377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:04:54.985165  695520 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:04:54.985213  695520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:04:55.012874  695520 cri.go:89] found id: ""
	I1124 09:04:55.012940  695520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:04:55.020831  695520 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:04:55.029069  695520 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:04:55.029111  695520 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:04:55.036334  695520 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:04:55.036348  695520 kubeadm.go:158] found existing configuration files:
	
	I1124 09:04:55.036384  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:04:55.044532  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:04:55.044579  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:04:55.051885  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:04:55.059335  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:04:55.059381  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:04:55.066924  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:04:55.075157  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:04:55.075202  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:04:55.082536  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:04:55.090276  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:04:55.090333  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:04:55.097848  695520 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:04:55.141844  695520 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 09:04:55.142222  695520 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:04:55.176293  695520 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:04:55.176360  695520 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:04:55.176399  695520 kubeadm.go:319] OS: Linux
	I1124 09:04:55.176522  695520 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:04:55.176607  695520 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:04:55.176692  695520 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:04:55.176788  695520 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:04:55.176861  695520 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:04:55.176926  695520 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:04:55.177000  695520 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:04:55.177072  695520 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:04:55.267260  695520 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:04:55.267430  695520 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:04:55.267573  695520 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 09:04:55.406819  695520 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:04:55.408942  695520 out.go:252]   - Generating certificates and keys ...
	I1124 09:04:55.409040  695520 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:04:55.409154  695520 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:04:55.535942  695520 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:04:55.747446  695520 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:04:56.231180  695520 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:04:56.348617  695520 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:04:56.564540  695520 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:04:56.564771  695520 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-128377] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 09:04:54.234417  696018 cli_runner.go:164] Run: docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:54.252265  696018 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:04:54.256402  696018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:04:54.271173  696018 kubeadm.go:884] updating cluster {Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:04:54.271376  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:54.585565  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:54.895614  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:55.213448  696018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:04:55.213537  696018 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:04:55.248674  696018 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1124 09:04:55.248704  696018 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.5.24-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 09:04:55.248761  696018 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:55.248818  696018 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.248841  696018 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.248860  696018 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 09:04:55.248864  696018 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.248833  696018 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.248841  696018 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.249034  696018 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.250186  696018 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.250215  696018 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:55.250182  696018 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.24-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.250186  696018 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 09:04:55.250253  696018 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.250254  696018 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.250188  696018 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.250648  696018 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.411211  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139"
	I1124 09:04:55.411274  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.432666  696018 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1124 09:04:55.432717  696018 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.432775  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.436380  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810"
	I1124 09:04:55.436448  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.436570  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.438317  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b"
	I1124 09:04:55.438376  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.445544  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc"
	I1124 09:04:55.445608  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.462611  696018 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1124 09:04:55.462672  696018 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.462735  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.466873  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1124 09:04:55.466944  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1124 09:04:55.469707  696018 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1124 09:04:55.469760  696018 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.469761  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.469806  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.476188  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.5.24-0" and sha "8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d"
	I1124 09:04:55.476260  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.476601  696018 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1124 09:04:55.476645  696018 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.476700  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.476760  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.483510  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46"
	I1124 09:04:55.483571  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.493634  696018 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 09:04:55.493674  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.493687  696018 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 09:04:55.493730  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.504559  696018 cache_images.go:118] "registry.k8s.io/etcd:3.5.24-0" needs transfer: "registry.k8s.io/etcd:3.5.24-0" does not exist at hash "8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d" in container runtime
	I1124 09:04:55.504599  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.504606  696018 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.504646  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.512866  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.512892  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.512910  696018 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1124 09:04:55.512950  696018 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.512990  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.526695  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:04:55.526717  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.526785  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.539513  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1124 09:04:55.539663  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:04:55.546674  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.546750  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.546715  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.564076  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.567023  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1124 09:04:55.567049  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.567061  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1124 09:04:55.567151  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:04:55.598524  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.598552  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.598652  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1124 09:04:55.598735  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:04:55.614879  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.624975  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:55.625072  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:55.679323  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:04:55.684055  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1124 09:04:55.684090  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.684124  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.684140  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:04:55.684150  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0
	I1124 09:04:55.684159  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.684160  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1124 09:04:55.684171  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1124 09:04:55.684244  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:04:55.736024  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 09:04:55.736135  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 09:04:55.746073  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.746108  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1124 09:04:55.746157  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:55.746175  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.24-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.24-0': No such file or directory
	I1124 09:04:55.746191  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 --> /var/lib/minikube/images/etcd_3.5.24-0 (23728640 bytes)
	I1124 09:04:55.746248  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:55.801010  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 09:04:55.801049  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 09:04:55.808405  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.808441  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1124 09:04:55.880897  696018 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 09:04:55.880969  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1124 09:04:56.015999  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1124 09:04:56.068815  696018 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:04:56.068912  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:04:56.453297  696018 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1124 09:04:56.453371  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:57.304727  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.24-0: (1.235782073s)
	I1124 09:04:57.304763  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 from cache
	I1124 09:04:57.304794  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:57.304806  696018 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 09:04:57.304847  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:57.304858  696018 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:57.304920  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:56.768431  695520 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:04:56.768677  695520 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-128377] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 09:04:57.042517  695520 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:04:57.135211  695520 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:04:57.487492  695520 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:04:57.487607  695520 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:04:57.647815  695520 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:04:57.788032  695520 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:04:58.007063  695520 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:04:58.262043  695520 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:04:58.262616  695520 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:04:58.265868  695520 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:04:55.921561  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:04:55.921607  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:58.266858  695520 out.go:252]   - Booting up control plane ...
	I1124 09:04:58.266989  695520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:04:58.267065  695520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:04:58.267746  695520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:04:58.282824  695520 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:04:58.283699  695520 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:04:58.283773  695520 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:04:58.419897  695520 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 09:04:58.797650  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.492766226s)
	I1124 09:04:58.797672  696018 ssh_runner.go:235] Completed: which crictl: (1.492732478s)
	I1124 09:04:58.797693  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1124 09:04:58.797722  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:58.797742  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:58.797763  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:59.494097  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1124 09:04:59.494141  696018 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:04:59.494193  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:04:59.494314  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:00.636087  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.141861944s)
	I1124 09:05:00.636150  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1124 09:05:00.636183  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:05:00.636184  696018 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.141835433s)
	I1124 09:05:00.636272  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:05:00.636277  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:01.829551  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.193240306s)
	I1124 09:05:01.829586  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1124 09:05:01.829561  696018 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.193259021s)
	I1124 09:05:01.829618  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:05:01.829656  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:05:01.829661  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 09:05:01.829741  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:05:02.922442  695520 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502768 seconds
	I1124 09:05:02.922650  695520 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:05:02.938003  695520 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:05:03.487168  695520 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:05:03.487569  695520 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-128377 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:05:03.997647  695520 kubeadm.go:319] [bootstrap-token] Using token: jnao2u.ovlrxqviyhx4po41
	I1124 09:05:03.999063  695520 out.go:252]   - Configuring RBAC rules ...
	I1124 09:05:03.999223  695520 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:05:04.003823  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:05:04.010298  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:05:04.012923  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:05:04.015535  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:05:04.019043  695520 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:05:04.029389  695520 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:05:04.209549  695520 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:05:04.407855  695520 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:05:04.408750  695520 kubeadm.go:319] 
	I1124 09:05:04.408814  695520 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:05:04.408821  695520 kubeadm.go:319] 
	I1124 09:05:04.408930  695520 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:05:04.408949  695520 kubeadm.go:319] 
	I1124 09:05:04.408983  695520 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:05:04.409060  695520 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:05:04.409107  695520 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:05:04.409122  695520 kubeadm.go:319] 
	I1124 09:05:04.409207  695520 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:05:04.409227  695520 kubeadm.go:319] 
	I1124 09:05:04.409283  695520 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:05:04.409289  695520 kubeadm.go:319] 
	I1124 09:05:04.409340  695520 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:05:04.409401  695520 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:05:04.409519  695520 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:05:04.409531  695520 kubeadm.go:319] 
	I1124 09:05:04.409633  695520 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:05:04.409739  695520 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:05:04.409748  695520 kubeadm.go:319] 
	I1124 09:05:04.409856  695520 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jnao2u.ovlrxqviyhx4po41 \
	I1124 09:05:04.409989  695520 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be \
	I1124 09:05:04.410028  695520 kubeadm.go:319] 	--control-plane 
	I1124 09:05:04.410043  695520 kubeadm.go:319] 
	I1124 09:05:04.410157  695520 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:05:04.410168  695520 kubeadm.go:319] 
	I1124 09:05:04.410253  695520 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jnao2u.ovlrxqviyhx4po41 \
	I1124 09:05:04.410416  695520 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be 
	I1124 09:05:04.412734  695520 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:05:04.412863  695520 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:05:04.412887  695520 cni.go:84] Creating CNI manager for ""
	I1124 09:05:04.412895  695520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:05:04.414780  695520 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:05:00.922661  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:00.922710  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:04.415630  695520 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:05:04.420099  695520 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 09:05:04.420115  695520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:05:04.433073  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:05:05.091722  695520 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:05:05.091870  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-128377 minikube.k8s.io/updated_at=2025_11_24T09_05_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=old-k8s-version-128377 minikube.k8s.io/primary=true
	I1124 09:05:05.092348  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:05.102498  695520 ops.go:34] apiserver oom_adj: -16
	I1124 09:05:05.174868  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:05.675283  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:06.175310  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:02.915588  696018 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.085815853s)
	I1124 09:05:02.915634  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.085954166s)
	I1124 09:05:02.915671  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1124 09:05:02.915639  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 09:05:02.915716  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 09:05:02.976753  696018 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:05:02.976825  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:05:03.348632  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 09:05:03.348678  696018 cache_images.go:125] Successfully loaded all cached images
	I1124 09:05:03.348686  696018 cache_images.go:94] duration metric: took 8.099965824s to LoadCachedImages
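Each image above is loaded with the same check/transfer/import pattern visible in the log: stat the tarball under /var/lib/minikube/images, copy it from the local .minikube cache only if it is missing, then import it into containerd's k8s.io namespace. A minimal sketch of that pattern for a single image (the NODE alias is an assumption; the pause_3.10.1 paths mirror ones seen earlier in this log):

	# Hypothetical single-image version of the per-image loop in the log.
	NODE=minikube-node                                    # assumed SSH alias for the target node
	TAR=/var/lib/minikube/images/pause_3.10.1             # tarball path on the node
	CACHE=$HOME/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1

	# Transfer only when the tarball is not already present on the node.
	if ! ssh "$NODE" stat -c '%s %y' "$TAR" >/dev/null 2>&1; then
	  scp "$CACHE" "$NODE:$TAR"
	fi

	# Import into the k8s.io namespace so the CRI/containerd runtime can see it.
	ssh "$NODE" sudo ctr -n=k8s.io images import "$TAR"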
	I1124 09:05:03.348703  696018 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1124 09:05:03.348825  696018 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:05:03.348894  696018 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:05:03.376137  696018 cni.go:84] Creating CNI manager for ""
	I1124 09:05:03.376168  696018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:05:03.376188  696018 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:05:03.376210  696018 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820576 NodeName:no-preload-820576 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:05:03.376350  696018 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-820576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
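The block above is the complete kubeadm config generated for this profile (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one file). If it ever needs to be sanity-checked by hand, kubeadm can parse and validate it without changing cluster state by using a dry run; a sketch using the binary and config paths that appear later in this log:

	# Validate the generated config without creating any cluster state.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --dry-run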
	
	I1124 09:05:03.376422  696018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:05:03.385368  696018 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1124 09:05:03.385424  696018 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:05:03.394095  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1124 09:05:03.394128  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:05:03.394180  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1124 09:05:03.394191  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1124 09:05:03.394205  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1124 09:05:03.394225  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:05:03.399712  696018 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1124 09:05:03.399743  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1124 09:05:03.399797  696018 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1124 09:05:03.399839  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1124 09:05:03.414063  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1124 09:05:03.448582  696018 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1124 09:05:03.448623  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1124 09:05:03.941988  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:05:03.950659  696018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1124 09:05:03.964545  696018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:05:03.980698  696018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
	I1124 09:05:03.994370  696018 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:05:03.999682  696018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:05:04.011951  696018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:05:04.105068  696018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:05:04.129581  696018 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576 for IP: 192.168.85.2
	I1124 09:05:04.129609  696018 certs.go:195] generating shared ca certs ...
	I1124 09:05:04.129631  696018 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.129796  696018 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:05:04.129861  696018 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:05:04.129876  696018 certs.go:257] generating profile certs ...
	I1124 09:05:04.129944  696018 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key
	I1124 09:05:04.129964  696018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt with IP's: []
	I1124 09:05:04.178331  696018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt ...
	I1124 09:05:04.178368  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt: {Name:mk7a6d48f62cb24db3b80fa6902658a2fab15360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.178586  696018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key ...
	I1124 09:05:04.178605  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key: {Name:mke761c4ec29e36beccc716dc800bc8fd841e3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.178724  696018 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632
	I1124 09:05:04.178748  696018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 09:05:04.417670  696018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632 ...
	I1124 09:05:04.417694  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632: {Name:mk59a2d57d772e51aeeeb2a9a4dca760203e6d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.417874  696018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632 ...
	I1124 09:05:04.417897  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632: {Name:mkdb0be38fd80ef77438b49aa69b9308c6d28ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.418023  696018 certs.go:382] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt
	I1124 09:05:04.418147  696018 certs.go:386] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key
	I1124 09:05:04.418202  696018 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key
	I1124 09:05:04.418217  696018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt with IP's: []
	I1124 09:05:04.604435  696018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt ...
	I1124 09:05:04.604497  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt: {Name:mk5719f2112f16d39272baf4588ce9b65d33d2a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.604728  696018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key ...
	I1124 09:05:04.604746  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key: {Name:mk56d8ccc21a879d6506ee3380097e85fb4b4f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.605022  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:05:04.605073  696018 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:05:04.605084  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:05:04.605120  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:05:04.605160  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:05:04.605195  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:05:04.605369  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:05:04.606568  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:05:04.626964  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:05:04.644973  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:05:04.663649  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:05:04.681360  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:05:04.699027  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:05:04.716381  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:05:04.734298  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:05:04.752033  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:05:04.771861  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:05:04.789824  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:05:04.808313  696018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:05:04.826085  696018 ssh_runner.go:195] Run: openssl version
	I1124 09:05:04.834356  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:05:04.843772  696018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:05:04.848660  696018 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:05:04.848725  696018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:05:04.887168  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:05:04.897113  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:05:04.907480  696018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:05:04.911694  696018 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:05:04.911746  696018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:05:04.951326  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:05:04.961765  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:05:04.972056  696018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:05:04.976497  696018 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:05:04.976554  696018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:05:05.017003  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
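The three test/ln sequences above follow the standard OpenSSL CA layout: each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash with a .0 suffix, which is the name OpenSSL-based clients use for lookup. The same pattern for one certificate (the extra-ca.pem path is illustrative, not from this run):

	# Hypothetical example of installing one CA cert the way the log does.
	CERT=/usr/share/ca-certificates/extra-ca.pem      # illustrative path

	# openssl prints the subject hash used for lookups in /etc/ssl/certs.
	HASH=$(openssl x509 -hash -noout -in "$CERT")

	# Link it as <hash>.0 so clients find it without rebuilding the CA bundle.
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"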
	I1124 09:05:05.027292  696018 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:05:05.031547  696018 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:05:05.031616  696018 kubeadm.go:401] StartCluster: {Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:05:05.031711  696018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:05:05.031765  696018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:05:05.062044  696018 cri.go:89] found id: ""
	I1124 09:05:05.062126  696018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:05:05.071887  696018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:05:05.082157  696018 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:05:05.082217  696018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:05:05.091225  696018 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:05:05.091248  696018 kubeadm.go:158] found existing configuration files:
	
	I1124 09:05:05.091296  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:05:05.100600  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:05:05.100657  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:05:05.110555  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:05:05.119216  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:05:05.119288  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:05:05.127876  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:05:05.136154  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:05:05.136205  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:05:05.145077  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:05:05.154290  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:05:05.154338  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:05:05.162702  696018 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:05:05.200662  696018 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:05:05.200757  696018 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:05:05.269623  696018 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:05:05.269714  696018 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:05:05.269770  696018 kubeadm.go:319] OS: Linux
	I1124 09:05:05.269842  696018 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:05:05.269920  696018 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:05:05.270003  696018 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:05:05.270084  696018 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:05:05.270155  696018 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:05:05.270223  696018 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:05:05.270303  696018 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:05:05.270377  696018 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:05:05.332844  696018 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:05:05.332992  696018 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:05:05.333150  696018 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:05:06.734694  696018 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:05:06.738817  696018 out.go:252]   - Generating certificates and keys ...
	I1124 09:05:06.738929  696018 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:05:06.739072  696018 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:05:06.832143  696018 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:05:06.955015  696018 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:05:07.027143  696018 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:05:07.115762  696018 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:05:07.265716  696018 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:05:07.265857  696018 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-820576] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 09:05:07.364684  696018 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:05:07.364865  696018 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-820576] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 09:05:07.523315  696018 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:05:07.590589  696018 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:05:07.746307  696018 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:05:07.746426  696018 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:05:07.869677  696018 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:05:07.978931  696018 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:05:08.053720  696018 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:05:08.085227  696018 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:05:08.160011  696018 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:05:08.160849  696018 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:05:08.165435  696018 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:05:05.923694  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:05.923742  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:06.675415  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:07.175277  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:07.676031  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:08.174962  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:08.675088  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:09.175102  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:09.675096  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:10.175027  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:10.675655  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:11.175703  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:08.166975  696018 out.go:252]   - Booting up control plane ...
	I1124 09:05:08.167117  696018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:05:08.167189  696018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:05:08.167816  696018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:05:08.183769  696018 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:05:08.183936  696018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:05:08.191856  696018 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:05:08.191990  696018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:05:08.192031  696018 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:05:08.308076  696018 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:05:08.308205  696018 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:05:09.309901  696018 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001908715s
	I1124 09:05:09.316051  696018 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:05:09.316157  696018 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 09:05:09.316247  696018 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:05:09.316315  696018 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:05:10.320869  696018 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004644301s
	I1124 09:05:10.832866  696018 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.516703459s
	I1124 09:05:12.317179  696018 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001080604s
	I1124 09:05:12.331544  696018 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:05:12.339378  696018 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:05:12.347526  696018 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:05:12.347705  696018 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-820576 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:05:12.354657  696018 kubeadm.go:319] [bootstrap-token] Using token: awoygq.wealvtzys3befsou
	I1124 09:05:12.355757  696018 out.go:252]   - Configuring RBAC rules ...
	I1124 09:05:12.355888  696018 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:05:12.359613  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:05:12.364202  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:05:12.366491  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:05:12.369449  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:05:12.371508  696018 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:05:12.722783  696018 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:05:13.137535  696018 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:05:13.723038  696018 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:05:13.724197  696018 kubeadm.go:319] 
	I1124 09:05:13.724302  696018 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:05:13.724317  696018 kubeadm.go:319] 
	I1124 09:05:13.724412  696018 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:05:13.724424  696018 kubeadm.go:319] 
	I1124 09:05:13.724520  696018 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:05:13.724630  696018 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:05:13.724716  696018 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:05:13.724730  696018 kubeadm.go:319] 
	I1124 09:05:13.724818  696018 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:05:13.724831  696018 kubeadm.go:319] 
	I1124 09:05:13.724897  696018 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:05:13.724906  696018 kubeadm.go:319] 
	I1124 09:05:13.724990  696018 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:05:13.725105  696018 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:05:13.725212  696018 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:05:13.725221  696018 kubeadm.go:319] 
	I1124 09:05:13.725338  696018 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:05:13.725493  696018 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:05:13.725510  696018 kubeadm.go:319] 
	I1124 09:05:13.725601  696018 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token awoygq.wealvtzys3befsou \
	I1124 09:05:13.725765  696018 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be \
	I1124 09:05:13.725804  696018 kubeadm.go:319] 	--control-plane 
	I1124 09:05:13.725816  696018 kubeadm.go:319] 
	I1124 09:05:13.725934  696018 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:05:13.725944  696018 kubeadm.go:319] 
	I1124 09:05:13.726041  696018 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token awoygq.wealvtzys3befsou \
	I1124 09:05:13.726243  696018 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be 
	I1124 09:05:13.728504  696018 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:05:13.728661  696018 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:05:13.728704  696018 cni.go:84] Creating CNI manager for ""
	I1124 09:05:13.728716  696018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:05:13.730529  696018 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:05:10.924882  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:10.924923  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:11.109506  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:47578->192.168.76.2:8443: read: connection reset by peer
	I1124 09:05:11.421112  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:11.421646  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:11.920950  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:11.921496  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:12.421219  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:12.421692  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:12.921430  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:12.921911  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:13.420431  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:13.420926  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:13.920542  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:13.921060  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:14.420434  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:14.420859  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:11.675776  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:12.175192  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:12.675267  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:13.175941  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:13.675281  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:14.175267  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:14.675185  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.175391  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.675966  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.175887  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.675144  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.175281  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.260591  695520 kubeadm.go:1114] duration metric: took 12.168846115s to wait for elevateKubeSystemPrivileges
	I1124 09:05:17.260625  695520 kubeadm.go:403] duration metric: took 22.275566194s to StartCluster
	I1124 09:05:17.260655  695520 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:17.260738  695520 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:05:17.261860  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:17.262121  695520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:05:17.262124  695520 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:05:17.262197  695520 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:05:17.262308  695520 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-128377"
	I1124 09:05:17.262334  695520 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-128377"
	I1124 09:05:17.262358  695520 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:05:17.262376  695520 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:05:17.262351  695520 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-128377"
	I1124 09:05:17.262443  695520 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-128377"
	I1124 09:05:17.262844  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:05:17.263075  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:05:17.263365  695520 out.go:179] * Verifying Kubernetes components...
	I1124 09:05:17.264408  695520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:05:17.287510  695520 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-128377"
	I1124 09:05:17.287559  695520 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:05:17.287978  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:05:17.288769  695520 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:13.732137  696018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:05:13.737711  696018 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1124 09:05:13.737726  696018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:05:13.752118  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:05:13.951744  696018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:05:13.951795  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:13.951847  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-820576 minikube.k8s.io/updated_at=2025_11_24T09_05_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=no-preload-820576 minikube.k8s.io/primary=true
	I1124 09:05:13.962047  696018 ops.go:34] apiserver oom_adj: -16
	I1124 09:05:14.022754  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:14.523671  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.023231  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.523083  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.023230  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.523666  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.022940  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.523444  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.290230  695520 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:17.290253  695520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:05:17.290314  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:05:17.317679  695520 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:17.317704  695520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:05:17.317768  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:05:17.319048  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:05:17.343853  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:05:17.366525  695520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:05:17.411998  695520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:05:17.447003  695520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:17.463082  695520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:17.632983  695520 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 09:05:17.634312  695520 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-128377" to be "Ready" ...
	I1124 09:05:17.888856  695520 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:05:18.022851  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:18.523601  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:18.589169  696018 kubeadm.go:1114] duration metric: took 4.637423043s to wait for elevateKubeSystemPrivileges
	I1124 09:05:18.589209  696018 kubeadm.go:403] duration metric: took 13.557597169s to StartCluster
	I1124 09:05:18.589237  696018 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:18.589321  696018 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:05:18.590747  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:18.590988  696018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:05:18.591000  696018 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:05:18.591095  696018 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:05:18.591206  696018 addons.go:70] Setting storage-provisioner=true in profile "no-preload-820576"
	I1124 09:05:18.591219  696018 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:05:18.591236  696018 addons.go:239] Setting addon storage-provisioner=true in "no-preload-820576"
	I1124 09:05:18.591251  696018 addons.go:70] Setting default-storageclass=true in profile "no-preload-820576"
	I1124 09:05:18.591275  696018 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:05:18.591283  696018 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820576"
	I1124 09:05:18.591664  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:05:18.591855  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:05:18.592299  696018 out.go:179] * Verifying Kubernetes components...
	I1124 09:05:18.593599  696018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:05:18.615163  696018 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:18.615451  696018 addons.go:239] Setting addon default-storageclass=true in "no-preload-820576"
	I1124 09:05:18.615530  696018 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:05:18.615851  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:05:18.616223  696018 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:18.616245  696018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:05:18.616301  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:05:18.646443  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:05:18.647885  696018 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:18.647963  696018 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:05:18.648059  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:05:18.675529  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:05:18.685797  696018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:05:18.752704  696018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:05:18.775922  696018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:18.800792  696018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:18.878758  696018 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 09:05:18.880873  696018 node_ready.go:35] waiting up to 6m0s for node "no-preload-820576" to be "Ready" ...
	I1124 09:05:19.096304  696018 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:05:14.921188  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:14.921633  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:15.421327  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:15.421818  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:15.920573  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:15.921034  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:16.421282  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:16.421841  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:16.921386  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:16.921942  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:17.420551  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:17.421007  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:17.920666  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:17.921181  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:18.420539  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:18.421011  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:18.920611  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:18.921079  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:19.420539  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:19.421004  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:17.889849  695520 addons.go:530] duration metric: took 627.656763ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:05:18.137738  695520 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-128377" context rescaled to 1 replicas
	W1124 09:05:19.637948  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	I1124 09:05:19.097398  696018 addons.go:530] duration metric: took 506.310963ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:05:19.383938  696018 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-820576" context rescaled to 1 replicas
	W1124 09:05:20.884989  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	I1124 09:05:19.920806  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:19.921207  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:20.420831  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:20.421312  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:20.920613  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:20.921185  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:21.420832  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:21.421240  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:21.920531  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:21.921019  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:22.420552  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1124 09:05:21.638057  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:23.638668  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:26.137883  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:23.383937  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	W1124 09:05:25.384443  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	I1124 09:05:27.421276  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:27.421318  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1124 09:05:28.138098  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:30.638120  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:27.884284  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	W1124 09:05:29.884474  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	W1124 09:05:32.384199  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	I1124 09:05:31.637332  695520 node_ready.go:49] node "old-k8s-version-128377" is "Ready"
	I1124 09:05:31.637368  695520 node_ready.go:38] duration metric: took 14.003009675s for node "old-k8s-version-128377" to be "Ready" ...
	I1124 09:05:31.637385  695520 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:05:31.637443  695520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:05:31.650126  695520 api_server.go:72] duration metric: took 14.387953281s to wait for apiserver process to appear ...
	I1124 09:05:31.650156  695520 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:05:31.650179  695520 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:05:31.654078  695520 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 09:05:31.655253  695520 api_server.go:141] control plane version: v1.28.0
	I1124 09:05:31.655280  695520 api_server.go:131] duration metric: took 5.117021ms to wait for apiserver health ...
	I1124 09:05:31.655289  695520 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:05:31.658830  695520 system_pods.go:59] 8 kube-system pods found
	I1124 09:05:31.658868  695520 system_pods.go:61] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:31.658877  695520 system_pods.go:61] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:31.658889  695520 system_pods.go:61] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:31.658895  695520 system_pods.go:61] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:31.658906  695520 system_pods.go:61] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:31.658910  695520 system_pods.go:61] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:31.658916  695520 system_pods.go:61] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:31.658921  695520 system_pods.go:61] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:31.658927  695520 system_pods.go:74] duration metric: took 3.632262ms to wait for pod list to return data ...
	I1124 09:05:31.658936  695520 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:05:31.660923  695520 default_sa.go:45] found service account: "default"
	I1124 09:05:31.660942  695520 default_sa.go:55] duration metric: took 2.000088ms for default service account to be created ...
	I1124 09:05:31.660950  695520 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:05:31.664223  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:31.664263  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:31.664272  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:31.664280  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:31.664284  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:31.664287  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:31.664291  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:31.664294  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:31.664300  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:31.664333  695520 retry.go:31] will retry after 195.108791ms: missing components: kube-dns
	I1124 09:05:31.863438  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:31.863494  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:31.863505  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:31.863515  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:31.863520  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:31.863525  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:31.863528  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:31.863540  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:31.863557  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:31.863579  695520 retry.go:31] will retry after 244.252087ms: missing components: kube-dns
	I1124 09:05:32.111547  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:32.111586  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:32.111595  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:32.111603  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:32.111608  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:32.111614  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:32.111628  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:32.111634  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:32.111641  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:32.111660  695520 retry.go:31] will retry after 471.342676ms: missing components: kube-dns
	I1124 09:05:32.587354  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:32.587384  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Running
	I1124 09:05:32.587389  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:32.587393  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:32.587397  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:32.587402  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:32.587405  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:32.587408  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:32.587411  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Running
	I1124 09:05:32.587420  695520 system_pods.go:126] duration metric: took 926.463548ms to wait for k8s-apps to be running ...
	I1124 09:05:32.587428  695520 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:05:32.587503  695520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:05:32.602305  695520 system_svc.go:56] duration metric: took 14.864147ms WaitForService to wait for kubelet
	I1124 09:05:32.602336  695520 kubeadm.go:587] duration metric: took 15.340181249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:05:32.602385  695520 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:05:32.605212  695520 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:05:32.605242  695520 node_conditions.go:123] node cpu capacity is 8
	I1124 09:05:32.605271  695520 node_conditions.go:105] duration metric: took 2.87532ms to run NodePressure ...
	I1124 09:05:32.605293  695520 start.go:242] waiting for startup goroutines ...
	I1124 09:05:32.605308  695520 start.go:247] waiting for cluster config update ...
	I1124 09:05:32.605327  695520 start.go:256] writing updated cluster config ...
	I1124 09:05:32.605690  695520 ssh_runner.go:195] Run: rm -f paused
	I1124 09:05:32.610319  695520 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:32.614557  695520 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vxxnm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.619322  695520 pod_ready.go:94] pod "coredns-5dd5756b68-vxxnm" is "Ready"
	I1124 09:05:32.619349  695520 pod_ready.go:86] duration metric: took 4.765973ms for pod "coredns-5dd5756b68-vxxnm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.622417  695520 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.626873  695520 pod_ready.go:94] pod "etcd-old-k8s-version-128377" is "Ready"
	I1124 09:05:32.626900  695520 pod_ready.go:86] duration metric: took 4.45394ms for pod "etcd-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.629800  695520 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.634310  695520 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-128377" is "Ready"
	I1124 09:05:32.634338  695520 pod_ready.go:86] duration metric: took 4.514426ms for pod "kube-apiserver-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.637382  695520 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.015375  695520 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-128377" is "Ready"
	I1124 09:05:33.015406  695520 pod_ready.go:86] duration metric: took 378.000797ms for pod "kube-controller-manager-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.215146  695520 pod_ready.go:83] waiting for pod "kube-proxy-fpbs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.614362  695520 pod_ready.go:94] pod "kube-proxy-fpbs2" is "Ready"
	I1124 09:05:33.614392  695520 pod_ready.go:86] duration metric: took 399.215049ms for pod "kube-proxy-fpbs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.815166  695520 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.214969  695520 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-128377" is "Ready"
	I1124 09:05:34.214999  695520 pod_ready.go:86] duration metric: took 399.806564ms for pod "kube-scheduler-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.215011  695520 pod_ready.go:40] duration metric: took 1.604660669s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:34.261989  695520 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 09:05:34.263612  695520 out.go:203] 
	W1124 09:05:34.264723  695520 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 09:05:34.265770  695520 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 09:05:34.267170  695520 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-128377" cluster and "default" namespace by default
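	(Illustrative sketch, not part of the captured log: the pod_ready wait above polls each labelled kube-system pod for the Ready condition with a 4m budget; a roughly equivalent manual check, assuming the same kubectl context name, is:)
	  kubectl --context old-k8s-version-128377 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	  kubectl --context old-k8s-version-128377 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m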
	I1124 09:05:32.422898  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:32.423021  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:05:32.423106  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:05:32.453902  685562 cri.go:89] found id: "1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:05:32.453922  685562 cri.go:89] found id: "4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	I1124 09:05:32.453927  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:05:32.453929  685562 cri.go:89] found id: ""
	I1124 09:05:32.453937  685562 logs.go:282] 3 containers: [1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:05:32.454000  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.458469  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.462439  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.466262  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:05:32.466335  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:05:32.496086  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:05:32.496112  685562 cri.go:89] found id: ""
	I1124 09:05:32.496122  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:05:32.496186  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.500443  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:05:32.500532  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:05:32.528567  685562 cri.go:89] found id: ""
	I1124 09:05:32.528602  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.528610  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:05:32.528617  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:05:32.528677  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:05:32.557355  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:05:32.557375  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:05:32.557379  685562 cri.go:89] found id: ""
	I1124 09:05:32.557388  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:05:32.557445  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.561666  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.565691  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:05:32.565776  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:05:32.594818  685562 cri.go:89] found id: ""
	I1124 09:05:32.594841  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.594848  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:05:32.594855  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:05:32.594900  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:05:32.625049  685562 cri.go:89] found id: "4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:05:32.625068  685562 cri.go:89] found id: "87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0"
	I1124 09:05:32.625073  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:05:32.625078  685562 cri.go:89] found id: ""
	I1124 09:05:32.625087  685562 logs.go:282] 3 containers: [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d 87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:05:32.625142  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.630042  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.634965  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.639315  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:05:32.639376  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:05:32.669355  685562 cri.go:89] found id: ""
	I1124 09:05:32.669384  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.669392  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:05:32.669398  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:05:32.669449  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:05:32.697559  685562 cri.go:89] found id: ""
	I1124 09:05:32.697586  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.697596  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:05:32.697610  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:05:32.697645  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:05:32.736120  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:05:32.736153  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:05:32.768484  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:05:32.768526  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:05:32.836058  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:05:32.836100  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:05:32.853541  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:05:32.853613  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1124 09:05:33.384739  696018 node_ready.go:49] node "no-preload-820576" is "Ready"
	I1124 09:05:33.384778  696018 node_ready.go:38] duration metric: took 14.503869435s for node "no-preload-820576" to be "Ready" ...
	I1124 09:05:33.384797  696018 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:05:33.384861  696018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:05:33.401268  696018 api_server.go:72] duration metric: took 14.81022929s to wait for apiserver process to appear ...
	I1124 09:05:33.401299  696018 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:05:33.401324  696018 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 09:05:33.406015  696018 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 09:05:33.407175  696018 api_server.go:141] control plane version: v1.35.0-beta.0
	I1124 09:05:33.407215  696018 api_server.go:131] duration metric: took 5.908148ms to wait for apiserver health ...
	I1124 09:05:33.407226  696018 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:05:33.410293  696018 system_pods.go:59] 8 kube-system pods found
	I1124 09:05:33.410331  696018 system_pods.go:61] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.410338  696018 system_pods.go:61] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.410346  696018 system_pods.go:61] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.410352  696018 system_pods.go:61] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.410360  696018 system_pods.go:61] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.410365  696018 system_pods.go:61] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.410369  696018 system_pods.go:61] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.410382  696018 system_pods.go:61] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.410391  696018 system_pods.go:74] duration metric: took 3.156993ms to wait for pod list to return data ...
	I1124 09:05:33.410403  696018 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:05:33.413158  696018 default_sa.go:45] found service account: "default"
	I1124 09:05:33.413182  696018 default_sa.go:55] duration metric: took 2.772178ms for default service account to be created ...
	I1124 09:05:33.413192  696018 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:05:33.416818  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:33.416849  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.416856  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.416863  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.416868  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.416874  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.416879  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.416884  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.416891  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.416935  696018 retry.go:31] will retry after 275.944352ms: missing components: kube-dns
	I1124 09:05:33.697203  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:33.697247  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.697259  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.697269  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.697274  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.697285  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.697290  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.697297  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.697304  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.697327  696018 retry.go:31] will retry after 278.68714ms: missing components: kube-dns
	I1124 09:05:33.979933  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:33.979971  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.979977  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.979984  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.979987  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.979991  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.979994  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.979998  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.980003  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.980020  696018 retry.go:31] will retry after 448.083964ms: missing components: kube-dns
	I1124 09:05:34.432301  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:34.432341  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Running
	I1124 09:05:34.432350  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:34.432355  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:34.432362  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:34.432369  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:34.432374  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:34.432379  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:34.432384  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Running
	I1124 09:05:34.432395  696018 system_pods.go:126] duration metric: took 1.019195458s to wait for k8s-apps to be running ...
	I1124 09:05:34.432410  696018 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:05:34.432534  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:05:34.451401  696018 system_svc.go:56] duration metric: took 18.978773ms WaitForService to wait for kubelet
	I1124 09:05:34.451444  696018 kubeadm.go:587] duration metric: took 15.860405681s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:05:34.451483  696018 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:05:34.454386  696018 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:05:34.454410  696018 node_conditions.go:123] node cpu capacity is 8
	I1124 09:05:34.454427  696018 node_conditions.go:105] duration metric: took 2.938205ms to run NodePressure ...
	I1124 09:05:34.454440  696018 start.go:242] waiting for startup goroutines ...
	I1124 09:05:34.454450  696018 start.go:247] waiting for cluster config update ...
	I1124 09:05:34.454478  696018 start.go:256] writing updated cluster config ...
	I1124 09:05:34.454771  696018 ssh_runner.go:195] Run: rm -f paused
	I1124 09:05:34.459160  696018 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:34.462567  696018 pod_ready.go:83] waiting for pod "coredns-7d764666f9-b6dpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.466303  696018 pod_ready.go:94] pod "coredns-7d764666f9-b6dpn" is "Ready"
	I1124 09:05:34.466324  696018 pod_ready.go:86] duration metric: took 3.738029ms for pod "coredns-7d764666f9-b6dpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.468156  696018 pod_ready.go:83] waiting for pod "etcd-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.471750  696018 pod_ready.go:94] pod "etcd-no-preload-820576" is "Ready"
	I1124 09:05:34.471775  696018 pod_ready.go:86] duration metric: took 3.597676ms for pod "etcd-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.473507  696018 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.477092  696018 pod_ready.go:94] pod "kube-apiserver-no-preload-820576" is "Ready"
	I1124 09:05:34.477115  696018 pod_ready.go:86] duration metric: took 3.588223ms for pod "kube-apiserver-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.478724  696018 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.862953  696018 pod_ready.go:94] pod "kube-controller-manager-no-preload-820576" is "Ready"
	I1124 09:05:34.862977  696018 pod_ready.go:86] duration metric: took 384.235741ms for pod "kube-controller-manager-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:35.063039  696018 pod_ready.go:83] waiting for pod "kube-proxy-vz24l" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:35.463183  696018 pod_ready.go:94] pod "kube-proxy-vz24l" is "Ready"
	I1124 09:05:35.463217  696018 pod_ready.go:86] duration metric: took 400.149042ms for pod "kube-proxy-vz24l" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:35.664151  696018 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:36.063590  696018 pod_ready.go:94] pod "kube-scheduler-no-preload-820576" is "Ready"
	I1124 09:05:36.063619  696018 pod_ready.go:86] duration metric: took 399.441074ms for pod "kube-scheduler-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:36.063632  696018 pod_ready.go:40] duration metric: took 1.604443296s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:36.110852  696018 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1124 09:05:36.112796  696018 out.go:179] * Done! kubectl is now configured to use "no-preload-820576" cluster and "default" namespace by default
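	(Illustrative sketch, not part of the captured log: the healthz wait above is an HTTPS GET against the apiserver endpoint shown in the log; a rough manual equivalent, with -k skipping certificate verification only for an ad-hoc probe, is:)
	  curl -k https://192.168.85.2:8443/healthz
	  # expected response body on success: ok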
	I1124 09:05:43.195573  685562 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.341935277s)
	W1124 09:05:43.195644  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:44544->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:44544->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1124 09:05:43.195660  685562 logs.go:123] Gathering logs for kube-apiserver [1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365] ...
	I1124 09:05:43.195679  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:05:43.229092  685562 logs.go:123] Gathering logs for kube-apiserver [4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680] ...
	I1124 09:05:43.229122  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	W1124 09:05:43.256709  685562 logs.go:130] failed kube-apiserver [4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:05:43.254237    2218 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680\": not found" containerID="4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	time="2025-11-24T09:05:43Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680\": not found"
	 output: 
	** stderr ** 
	E1124 09:05:43.254237    2218 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680\": not found" containerID="4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	time="2025-11-24T09:05:43Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680\": not found"
	
	** /stderr **
	I1124 09:05:43.256732  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:05:43.256745  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:05:43.296899  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:05:43.296933  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:05:43.327780  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:05:43.327805  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:05:43.363107  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:05:43.363150  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:05:43.395896  685562 logs.go:123] Gathering logs for kube-controller-manager [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d] ...
	I1124 09:05:43.395929  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:05:43.423650  685562 logs.go:123] Gathering logs for kube-controller-manager [87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0] ...
	I1124 09:05:43.423680  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0"
	I1124 09:05:43.453581  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:05:43.453608  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ba53f9b2ebdff       56cc512116c8f       7 seconds ago       Running             busybox                   0                   831740f12ed9d       busybox                                     default
	1ccff83dea1f3       aa5e3ebc0dfed       12 seconds ago      Running             coredns                   0                   e0449c7605999       coredns-7d764666f9-b6dpn                    kube-system
	372566a488aa6       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   0d4413669c9e7       storage-provisioner                         kube-system
	f013ec6444310       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   fe354f65119b6       kindnet-kvm52                               kube-system
	d11c1a1929cbd       8a4ded35a3eb1       27 seconds ago      Running             kube-proxy                0                   57880ad4cbc75       kube-proxy-vz24l                            kube-system
	3792977e1319f       7bb6219ddab95       37 seconds ago      Running             kube-scheduler            0                   e565b2950cf64       kube-scheduler-no-preload-820576            kube-system
	1cc365be5ed1f       45f3cc72d235f       37 seconds ago      Running             kube-controller-manager   0                   cb2692f06f53c       kube-controller-manager-no-preload-820576   kube-system
	942b50869b3b6       aa9d02839d8de       37 seconds ago      Running             kube-apiserver            0                   e9610922053aa       kube-apiserver-no-preload-820576            kube-system
	0d5c89e98d645       a3e246e9556e9       37 seconds ago      Running             etcd                      0                   169ddc6ab9603       etcd-no-preload-820576                      kube-system
	
	
	==> containerd <==
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.453159820Z" level=info msg="connecting to shim 372566a488aa6257b59eba829cf1e66299ccffe9066320bc512378d4a8f37fc3" address="unix:///run/containerd/s/328d596d67a9c8178c77086cf6bfbb902ebec5e36ed37603d7ba9a85ce28ed2c" protocol=ttrpc version=3
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.458836377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-b6dpn,Uid:c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0449c7605999fe2d4dcfd63696b4c675d2ebc7f7eb8c41128d3193b899aee4d\""
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.464455615Z" level=info msg="CreateContainer within sandbox \"e0449c7605999fe2d4dcfd63696b4c675d2ebc7f7eb8c41128d3193b899aee4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.471239221Z" level=info msg="Container 1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.477263883Z" level=info msg="CreateContainer within sandbox \"e0449c7605999fe2d4dcfd63696b4c675d2ebc7f7eb8c41128d3193b899aee4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5\""
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.477734207Z" level=info msg="StartContainer for \"1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5\""
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.478601790Z" level=info msg="connecting to shim 1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5" address="unix:///run/containerd/s/a82eab7c8d1b4c38df30ab62991838299020d1b0af8a8d1b36f581eae59ef54a" protocol=ttrpc version=3
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.510256932Z" level=info msg="StartContainer for \"372566a488aa6257b59eba829cf1e66299ccffe9066320bc512378d4a8f37fc3\" returns successfully"
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.531678403Z" level=info msg="StartContainer for \"1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5\" returns successfully"
	Nov 24 09:05:36 no-preload-820576 containerd[658]: time="2025-11-24T09:05:36.586122875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:ed19b18b-e761-4aff-8676-38be0169fca8,Namespace:default,Attempt:0,}"
	Nov 24 09:05:36 no-preload-820576 containerd[658]: time="2025-11-24T09:05:36.625353770Z" level=info msg="connecting to shim 831740f12ed9de73f3f54c86d73b7fad71866782ed9656618d60457d1203d284" address="unix:///run/containerd/s/6f2f0b70df621171749ff830e8c830132481fed0cd60e69bb1fa1cb83a2a46e2" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 09:05:36 no-preload-820576 containerd[658]: time="2025-11-24T09:05:36.692941527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:ed19b18b-e761-4aff-8676-38be0169fca8,Namespace:default,Attempt:0,} returns sandbox id \"831740f12ed9de73f3f54c86d73b7fad71866782ed9656618d60457d1203d284\""
	Nov 24 09:05:36 no-preload-820576 containerd[658]: time="2025-11-24T09:05:36.695009096Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.908578564Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.909174070Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396645"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.910365584Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.911989078Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.912276385Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.217226885s"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.912311145Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.916279483Z" level=info msg="CreateContainer within sandbox \"831740f12ed9de73f3f54c86d73b7fad71866782ed9656618d60457d1203d284\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.922931578Z" level=info msg="Container ba53f9b2ebdff2ced159f0e7ca034b202bd9776c53112341413551b23ed9b927: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.928338641Z" level=info msg="CreateContainer within sandbox \"831740f12ed9de73f3f54c86d73b7fad71866782ed9656618d60457d1203d284\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"ba53f9b2ebdff2ced159f0e7ca034b202bd9776c53112341413551b23ed9b927\""
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.928901777Z" level=info msg="StartContainer for \"ba53f9b2ebdff2ced159f0e7ca034b202bd9776c53112341413551b23ed9b927\""
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.929746506Z" level=info msg="connecting to shim ba53f9b2ebdff2ced159f0e7ca034b202bd9776c53112341413551b23ed9b927" address="unix:///run/containerd/s/6f2f0b70df621171749ff830e8c830132481fed0cd60e69bb1fa1cb83a2a46e2" protocol=ttrpc version=3
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.988244447Z" level=info msg="StartContainer for \"ba53f9b2ebdff2ced159f0e7ca034b202bd9776c53112341413551b23ed9b927\" returns successfully"
	
	
	==> coredns [1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54342 - 36437 "HINFO IN 4736891951819189544.4092727598254416540. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025902362s
	
	
	==> describe nodes <==
	Name:               no-preload-820576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-820576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=no-preload-820576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_05_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:05:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-820576
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:05:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:05:43 +0000   Mon, 24 Nov 2025 09:05:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:05:43 +0000   Mon, 24 Nov 2025 09:05:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:05:43 +0000   Mon, 24 Nov 2025 09:05:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:05:43 +0000   Mon, 24 Nov 2025 09:05:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-820576
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                d949245b-a9ed-47a9-91d5-7d5561bd8b90
	  Boot ID:                    f052cd47-57de-4521-b5fb-139979fdced9
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-b6dpn                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-820576                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-kvm52                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-820576             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-820576    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-vz24l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-820576             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node no-preload-820576 event: Registered Node no-preload-820576 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [0d5c89e98d645bf73cd4c5c3f30b9202f3ec35a62f3f8d3ae062d5d623eccb24] <==
	{"level":"warn","ts":"2025-11-24T09:05:10.214265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.224023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.231321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.239174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.246909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.253281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.260214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.266550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.273527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.282603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.288554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.295211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.301519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.308085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.314261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.321387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.327694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.333832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.339908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.361663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.364933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.371238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.377811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.384070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.431908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34196","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:05:46 up  3:48,  0 user,  load average: 4.43, 3.43, 10.79
	Linux no-preload-820576 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f013ec6444310f79abf35dd005056c59b873c4bea9b56849cc31c4d45f1fd1ea] <==
	I1124 09:05:22.747683       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:05:22.747935       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 09:05:22.748082       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:05:22.748098       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:05:22.748121       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:05:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:05:22.952020       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:05:22.952094       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:05:22.952107       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:05:22.952322       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:05:23.353143       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:05:23.353172       1 metrics.go:72] Registering metrics
	I1124 09:05:23.353260       1 controller.go:711] "Syncing nftables rules"
	I1124 09:05:32.951899       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:05:32.951958       1 main.go:301] handling current node
	I1124 09:05:42.952830       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:05:42.952880       1 main.go:301] handling current node
	
	
	==> kube-apiserver [942b50869b3b6efe304af13454ac7bcfcd639ee8d85edb9543534540fab1a5ac] <==
	I1124 09:05:10.909334       1 policy_source.go:248] refreshing policies
	E1124 09:05:10.932548       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1124 09:05:10.981562       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:05:10.985259       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:05:10.985502       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1124 09:05:10.990971       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:05:11.076869       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:05:11.784131       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1124 09:05:11.788179       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:05:11.788196       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1124 09:05:12.209320       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:05:12.246151       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:05:12.285780       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:05:12.290718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 09:05:12.291514       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:05:12.294940       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:05:12.826079       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:05:13.127776       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:05:13.136696       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:05:13.143569       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:05:18.481337       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:05:18.484897       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:05:18.680072       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:05:18.829415       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 09:05:45.392426       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:55526: use of closed network connection
	
	
	==> kube-controller-manager [1cc365be5ed1fbe0ff7cbef3bba9928f6de3ee57c3a2f87a37b5414ce840c1e5] <==
	I1124 09:05:17.652104       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.652138       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.652152       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.652201       1 range_allocator.go:177] "Sending events to api server"
	I1124 09:05:17.652237       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1124 09:05:17.652242       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:05:17.652246       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.652814       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.652923       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.653920       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.654009       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.654103       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.654638       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.655183       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.655289       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.655391       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.654741       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.656610       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.671052       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-820576" podCIDRs=["10.244.0.0/24"]
	I1124 09:05:17.672326       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.746153       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.746175       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1124 09:05:17.746182       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1124 09:05:17.746484       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:37.647634       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [d11c1a1929cbd874879bd2ca658768b3b17486a565a73f3198763d8937ab7159] <==
	I1124 09:05:19.405212       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:05:19.470704       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:05:19.571665       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:19.571707       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 09:05:19.571825       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:05:19.593457       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:05:19.593546       1 server_linux.go:136] "Using iptables Proxier"
	I1124 09:05:19.598806       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:05:19.599327       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 09:05:19.599366       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:05:19.601008       1 config.go:200] "Starting service config controller"
	I1124 09:05:19.601053       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:05:19.601477       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:05:19.601494       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:05:19.601544       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:05:19.601604       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:05:19.601940       1 config.go:309] "Starting node config controller"
	I1124 09:05:19.601962       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:05:19.701650       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:05:19.701674       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:05:19.701701       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:05:19.702186       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [3792977e1319f5110036c4177368941dfeff0808bfb81b4f1f9accba9dc895b0] <==
	E1124 09:05:10.834797       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1124 09:05:10.834808       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1124 09:05:10.834939       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1124 09:05:10.835008       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1124 09:05:11.768737       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1124 09:05:11.770023       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1124 09:05:11.806172       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1124 09:05:11.807198       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1124 09:05:11.842020       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1124 09:05:11.843143       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1124 09:05:11.962537       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1124 09:05:11.963477       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1124 09:05:11.963483       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1124 09:05:11.963611       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1124 09:05:11.964324       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1124 09:05:11.964442       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1124 09:05:11.969522       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1124 09:05:11.970454       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1124 09:05:12.020752       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1124 09:05:12.021838       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1124 09:05:12.026929       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1124 09:05:12.028011       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1124 09:05:12.052338       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1124 09:05:12.053203       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1124 09:05:14.726256       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Nov 24 09:05:18 no-preload-820576 kubelet[2188]: I1124 09:05:18.885392    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf9hq\" (UniqueName: \"kubernetes.io/projected/967c23e8-7e42-4034-b5a2-e4cd65bc4d94-kube-api-access-vf9hq\") pod \"kindnet-kvm52\" (UID: \"967c23e8-7e42-4034-b5a2-e4cd65bc4d94\") " pod="kube-system/kindnet-kvm52"
	Nov 24 09:05:18 no-preload-820576 kubelet[2188]: I1124 09:05:18.885446    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a64a474-1e1b-411d-aea6-9d12e1d9f84e-xtables-lock\") pod \"kube-proxy-vz24l\" (UID: \"4a64a474-1e1b-411d-aea6-9d12e1d9f84e\") " pod="kube-system/kube-proxy-vz24l"
	Nov 24 09:05:18 no-preload-820576 kubelet[2188]: I1124 09:05:18.885493    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/967c23e8-7e42-4034-b5a2-e4cd65bc4d94-lib-modules\") pod \"kindnet-kvm52\" (UID: \"967c23e8-7e42-4034-b5a2-e4cd65bc4d94\") " pod="kube-system/kindnet-kvm52"
	Nov 24 09:05:18 no-preload-820576 kubelet[2188]: I1124 09:05:18.885515    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwg2f\" (UniqueName: \"kubernetes.io/projected/4a64a474-1e1b-411d-aea6-9d12e1d9f84e-kube-api-access-gwg2f\") pod \"kube-proxy-vz24l\" (UID: \"4a64a474-1e1b-411d-aea6-9d12e1d9f84e\") " pod="kube-system/kube-proxy-vz24l"
	Nov 24 09:05:20 no-preload-820576 kubelet[2188]: I1124 09:05:20.009606    2188 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-vz24l" podStartSLOduration=2.009575988 podStartE2EDuration="2.009575988s" podCreationTimestamp="2025-11-24 09:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:20.009405568 +0000 UTC m=+7.132094701" watchObservedRunningTime="2025-11-24 09:05:20.009575988 +0000 UTC m=+7.132265063"
	Nov 24 09:05:20 no-preload-820576 kubelet[2188]: E1124 09:05:20.073715    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-820576" containerName="etcd"
	Nov 24 09:05:20 no-preload-820576 kubelet[2188]: E1124 09:05:20.442119    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-820576" containerName="kube-apiserver"
	Nov 24 09:05:22 no-preload-820576 kubelet[2188]: E1124 09:05:22.827379    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-820576" containerName="kube-scheduler"
	Nov 24 09:05:23 no-preload-820576 kubelet[2188]: I1124 09:05:23.021998    2188 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-kvm52" podStartSLOduration=2.048567637 podStartE2EDuration="5.021980117s" podCreationTimestamp="2025-11-24 09:05:18 +0000 UTC" firstStartedPulling="2025-11-24 09:05:19.465760669 +0000 UTC m=+6.588449726" lastFinishedPulling="2025-11-24 09:05:22.439173133 +0000 UTC m=+9.561862206" observedRunningTime="2025-11-24 09:05:23.021631445 +0000 UTC m=+10.144320521" watchObservedRunningTime="2025-11-24 09:05:23.021980117 +0000 UTC m=+10.144669192"
	Nov 24 09:05:24 no-preload-820576 kubelet[2188]: E1124 09:05:24.962071    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-820576" containerName="kube-controller-manager"
	Nov 24 09:05:30 no-preload-820576 kubelet[2188]: E1124 09:05:30.074006    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-820576" containerName="etcd"
	Nov 24 09:05:30 no-preload-820576 kubelet[2188]: E1124 09:05:30.448408    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-820576" containerName="kube-apiserver"
	Nov 24 09:05:32 no-preload-820576 kubelet[2188]: E1124 09:05:32.832618    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-820576" containerName="kube-scheduler"
	Nov 24 09:05:33 no-preload-820576 kubelet[2188]: I1124 09:05:33.014716    2188 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Nov 24 09:05:33 no-preload-820576 kubelet[2188]: I1124 09:05:33.095714    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/144d237b-4f80-441d-867b-0ee26edd8590-tmp\") pod \"storage-provisioner\" (UID: \"144d237b-4f80-441d-867b-0ee26edd8590\") " pod="kube-system/storage-provisioner"
	Nov 24 09:05:33 no-preload-820576 kubelet[2188]: I1124 09:05:33.095760    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr4ms\" (UniqueName: \"kubernetes.io/projected/144d237b-4f80-441d-867b-0ee26edd8590-kube-api-access-qr4ms\") pod \"storage-provisioner\" (UID: \"144d237b-4f80-441d-867b-0ee26edd8590\") " pod="kube-system/storage-provisioner"
	Nov 24 09:05:33 no-preload-820576 kubelet[2188]: I1124 09:05:33.095795    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1-config-volume\") pod \"coredns-7d764666f9-b6dpn\" (UID: \"c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1\") " pod="kube-system/coredns-7d764666f9-b6dpn"
	Nov 24 09:05:33 no-preload-820576 kubelet[2188]: I1124 09:05:33.095897    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nf2r\" (UniqueName: \"kubernetes.io/projected/c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1-kube-api-access-4nf2r\") pod \"coredns-7d764666f9-b6dpn\" (UID: \"c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1\") " pod="kube-system/coredns-7d764666f9-b6dpn"
	Nov 24 09:05:34 no-preload-820576 kubelet[2188]: E1124 09:05:34.029028    2188 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-b6dpn" containerName="coredns"
	Nov 24 09:05:34 no-preload-820576 kubelet[2188]: I1124 09:05:34.041906    2188 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-b6dpn" podStartSLOduration=16.041889167 podStartE2EDuration="16.041889167s" podCreationTimestamp="2025-11-24 09:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:34.041715133 +0000 UTC m=+21.164404209" watchObservedRunningTime="2025-11-24 09:05:34.041889167 +0000 UTC m=+21.164578242"
	Nov 24 09:05:34 no-preload-820576 kubelet[2188]: I1124 09:05:34.051548    2188 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.051533177 podStartE2EDuration="15.051533177s" podCreationTimestamp="2025-11-24 09:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:34.051306469 +0000 UTC m=+21.173995547" watchObservedRunningTime="2025-11-24 09:05:34.051533177 +0000 UTC m=+21.174222253"
	Nov 24 09:05:35 no-preload-820576 kubelet[2188]: E1124 09:05:35.033151    2188 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-b6dpn" containerName="coredns"
	Nov 24 09:05:36 no-preload-820576 kubelet[2188]: E1124 09:05:36.035006    2188 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-b6dpn" containerName="coredns"
	Nov 24 09:05:36 no-preload-820576 kubelet[2188]: I1124 09:05:36.313607    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlkv\" (UniqueName: \"kubernetes.io/projected/ed19b18b-e761-4aff-8676-38be0169fca8-kube-api-access-knlkv\") pod \"busybox\" (UID: \"ed19b18b-e761-4aff-8676-38be0169fca8\") " pod="default/busybox"
	Nov 24 09:05:39 no-preload-820576 kubelet[2188]: I1124 09:05:39.053972    2188 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.835569242 podStartE2EDuration="3.053954912s" podCreationTimestamp="2025-11-24 09:05:36 +0000 UTC" firstStartedPulling="2025-11-24 09:05:36.694661156 +0000 UTC m=+23.817350210" lastFinishedPulling="2025-11-24 09:05:38.913046824 +0000 UTC m=+26.035735880" observedRunningTime="2025-11-24 09:05:39.05362003 +0000 UTC m=+26.176309106" watchObservedRunningTime="2025-11-24 09:05:39.053954912 +0000 UTC m=+26.176643986"
	
	
	==> storage-provisioner [372566a488aa6257b59eba829cf1e66299ccffe9066320bc512378d4a8f37fc3] <==
	I1124 09:05:33.518708       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:05:33.526921       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:05:33.526973       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:05:33.529762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:33.539875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:05:33.540034       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:05:33.540191       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe9f1dac-6d1b-487a-9248-5f6453109d6b", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-820576_3a08bfb4-c7fa-4df8-97c3-4cc5a96f0994 became leader
	I1124 09:05:33.540287       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-820576_3a08bfb4-c7fa-4df8-97c3-4cc5a96f0994!
	W1124 09:05:33.542787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:33.546559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:05:33.641082       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-820576_3a08bfb4-c7fa-4df8-97c3-4cc5a96f0994!
	W1124 09:05:35.550005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:35.554075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:37.557403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:37.561227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:39.565032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:39.568902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:41.571752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:41.575652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:43.578893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:43.583135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:45.586509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:45.591565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-820576 -n no-preload-820576
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-820576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-820576
helpers_test.go:243: (dbg) docker inspect no-preload-820576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2",
	        "Created": "2025-11-24T09:04:50.428873291Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 696697,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:04:50.865515581Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2/hosts",
	        "LogPath": "/var/lib/docker/containers/fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2/fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2-json.log",
	        "Name": "/no-preload-820576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-820576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-820576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fbfc76af5db1b5ac496f820bea869349ea04d6bdec6b38f5e5f2d7ed76e9e0e2",
	                "LowerDir": "/var/lib/docker/overlay2/cef831c44676981960379b41c7a7ce597355fd430968301d8adaa7f1c89ecabf-init/diff:/var/lib/docker/overlay2/a062700147ad5d1f8f2a68f70ba6ad34ea6495dd365bcb265ab17ea27961837b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cef831c44676981960379b41c7a7ce597355fd430968301d8adaa7f1c89ecabf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cef831c44676981960379b41c7a7ce597355fd430968301d8adaa7f1c89ecabf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cef831c44676981960379b41c7a7ce597355fd430968301d8adaa7f1c89ecabf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-820576",
	                "Source": "/var/lib/docker/volumes/no-preload-820576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-820576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-820576",
	                "name.minikube.sigs.k8s.io": "no-preload-820576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d00e2266ea6274ea021af231036b967845b3499983d5775fb4cea7d5b1677a4e",
	            "SandboxKey": "/var/run/docker/netns/d00e2266ea62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-820576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7957ce7dc9aefa9cad531fe591f93551c8388eaf00488d017c6e11e46821fce7",
	                    "EndpointID": "da19cc42121dc67bd6d32b5462f319359aedb02efd9ff5344a89232e1394cff6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "52:15:8b:bd:8c:81",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-820576",
	                        "fbfc76af5db1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820576 -n no-preload-820576
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-820576 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-820576 logs -n 25: (1.166133094s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-203355 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                              │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                              │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ delete  │ -p missing-upgrade-058813                                                                                                                                                                                                                           │ missing-upgrade-058813 │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:04 UTC │
	│ ssh     │ -p cilium-203355 sudo systemctl cat docker --no-pager                                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo docker system info                                                                                                                                                                                                            │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo containerd config dump                                                                                                                                                                                                        │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo crio config                                                                                                                                                                                                                   │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ delete  │ -p cilium-203355                                                                                                                                                                                                                                    │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:04 UTC │
	│ start   │ -p old-k8s-version-128377 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:05 UTC │
	│ start   │ -p no-preload-820576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-128377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:04:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:04:47.686335  696018 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:04:47.686445  696018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:04:47.686456  696018 out.go:374] Setting ErrFile to fd 2...
	I1124 09:04:47.686474  696018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:04:47.686683  696018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 09:04:47.687133  696018 out.go:368] Setting JSON to false
	I1124 09:04:47.688408  696018 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13624,"bootTime":1763961464,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:04:47.688532  696018 start.go:143] virtualization: kvm guest
	I1124 09:04:47.690354  696018 out.go:179] * [no-preload-820576] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:04:47.691472  696018 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:04:47.691501  696018 notify.go:221] Checking for updates...
	I1124 09:04:47.693590  696018 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:04:47.694681  696018 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:04:47.695683  696018 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 09:04:47.697109  696018 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:04:47.698248  696018 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:04:47.699807  696018 config.go:182] Loaded profile config "cert-expiration-869306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 09:04:47.699947  696018 config.go:182] Loaded profile config "kubernetes-upgrade-521313": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:04:47.700091  696018 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:04:47.700236  696018 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:04:47.724639  696018 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:04:47.724770  696018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:04:47.791833  696018 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-24 09:04:47.780432821 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:04:47.791998  696018 docker.go:319] overlay module found
	I1124 09:04:47.794089  696018 out.go:179] * Using the docker driver based on user configuration
	I1124 09:04:47.795621  696018 start.go:309] selected driver: docker
	I1124 09:04:47.795639  696018 start.go:927] validating driver "docker" against <nil>
	I1124 09:04:47.795651  696018 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:04:47.796325  696018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:04:47.859511  696018 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 09:04:47.848833175 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:04:47.859748  696018 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:04:47.859957  696018 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:04:47.861778  696018 out.go:179] * Using Docker driver with root privileges
	I1124 09:04:47.862632  696018 cni.go:84] Creating CNI manager for ""
	I1124 09:04:47.862696  696018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:04:47.862708  696018 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:04:47.862775  696018 start.go:353] cluster config:
	{Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:04:47.863875  696018 out.go:179] * Starting "no-preload-820576" primary control-plane node in "no-preload-820576" cluster
	I1124 09:04:47.864812  696018 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 09:04:47.865865  696018 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:04:47.866835  696018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:04:47.866921  696018 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:04:47.866958  696018 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json ...
	I1124 09:04:47.867001  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json: {Name:mk04f43d651118a00ac1be32029cffb149669d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:47.867208  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:47.890231  696018 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:04:47.890260  696018 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:04:47.890281  696018 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:04:47.890321  696018 start.go:360] acquireMachinesLock for no-preload-820576: {Name:mk6b6fb581999217c645edacaa9c18971e97964f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:47.890432  696018 start.go:364] duration metric: took 88.402µs to acquireMachinesLock for "no-preload-820576"
	I1124 09:04:47.890474  696018 start.go:93] Provisioning new machine with config: &{Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:04:47.890567  696018 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:04:48.739369  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:40906->192.168.76.2:8443: read: connection reset by peer
	I1124 09:04:48.739430  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:48.740184  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:48.920539  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:48.921019  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:49.420530  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:49.420996  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
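
The 685562 lines above poll the apiserver healthz endpoint until it answers. A minimal Go sketch of that probe loop, with the URL and retry delay taken from the log; skipping TLS verification for the probe is an assumption of this sketch, not minikube's implementation:

// healthz_poll.go - illustrative probe of https://192.168.76.2:8443/healthz,
// mirroring the "Checking apiserver healthz ... / stopped: ..." lines above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's cert is not trusted by this host, so the probe skips
		// verification (assumption for this sketch only).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.76.2:8443/healthz"
	for i := 0; i < 5; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. "connect: connection refused" as in the log
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz status:", resp.Status)
		return
	}
}
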
	I1124 09:04:46.813535  695520 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:04:46.813778  695520 start.go:159] libmachine.API.Create for "old-k8s-version-128377" (driver="docker")
	I1124 09:04:46.813816  695520 client.go:173] LocalClient.Create starting
	I1124 09:04:46.813892  695520 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem
	I1124 09:04:46.813936  695520 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:46.813967  695520 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:46.814043  695520 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem
	I1124 09:04:46.814076  695520 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:46.814095  695520 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:46.814441  695520 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:04:46.831913  695520 cli_runner.go:211] docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:04:46.831996  695520 network_create.go:284] running [docker network inspect old-k8s-version-128377] to gather additional debugging logs...
	I1124 09:04:46.832018  695520 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377
	W1124 09:04:46.848875  695520 cli_runner.go:211] docker network inspect old-k8s-version-128377 returned with exit code 1
	I1124 09:04:46.848912  695520 network_create.go:287] error running [docker network inspect old-k8s-version-128377]: docker network inspect old-k8s-version-128377: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-128377 not found
	I1124 09:04:46.848928  695520 network_create.go:289] output of [docker network inspect old-k8s-version-128377]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-128377 not found
	
	** /stderr **
	I1124 09:04:46.849044  695520 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:46.866840  695520 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c654f70fdf0e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:f7:ca:91:9d:ad} reservation:<nil>}
	I1124 09:04:46.867443  695520 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1081c4000c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:b1:6d:32:2c:78} reservation:<nil>}
	I1124 09:04:46.868124  695520 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30fdd1988974 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:59:2f:0a:61:81} reservation:<nil>}
	I1124 09:04:46.868877  695520 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cd297979890 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:91:f3:e4:95:17} reservation:<nil>}
	I1124 09:04:46.869272  695520 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9bf62793deff IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0a:d1:a9:3b:89:29} reservation:<nil>}
	I1124 09:04:46.869983  695520 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5fa0f78c53ad IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:9e:96:d6:0a:fe:a6} reservation:<nil>}
	I1124 09:04:46.870809  695520 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e158e0}
	I1124 09:04:46.870832  695520 network_create.go:124] attempt to create docker network old-k8s-version-128377 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 09:04:46.870880  695520 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-128377 old-k8s-version-128377
	I1124 09:04:46.993201  695520 network_create.go:108] docker network old-k8s-version-128377 192.168.103.0/24 created
	I1124 09:04:46.993243  695520 kic.go:121] calculated static IP "192.168.103.2" for the "old-k8s-version-128377" container
	I1124 09:04:46.993321  695520 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:04:47.015308  695520 cli_runner.go:164] Run: docker volume create old-k8s-version-128377 --label name.minikube.sigs.k8s.io=old-k8s-version-128377 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:04:47.034791  695520 oci.go:103] Successfully created a docker volume old-k8s-version-128377
	I1124 09:04:47.034869  695520 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-128377-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-128377 --entrypoint /usr/bin/test -v old-k8s-version-128377:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:04:47.772927  695520 oci.go:107] Successfully prepared a docker volume old-k8s-version-128377
	I1124 09:04:47.773023  695520 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 09:04:47.773041  695520 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 09:04:47.773133  695520 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-128377:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 09:04:50.987600  695520 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-128377:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.214396647s)
	I1124 09:04:50.987639  695520 kic.go:203] duration metric: took 3.214593361s to extract preloaded images to volume ...
	W1124 09:04:50.987789  695520 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:04:50.987849  695520 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:04:50.987920  695520 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:04:51.061728  695520 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-128377 --name old-k8s-version-128377 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-128377 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-128377 --network old-k8s-version-128377 --ip 192.168.103.2 --volume old-k8s-version-128377:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:04:51.401514  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Running}}
	I1124 09:04:51.426748  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:04:51.456228  695520 cli_runner.go:164] Run: docker exec old-k8s-version-128377 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:04:51.515517  695520 oci.go:144] the created container "old-k8s-version-128377" has a running status.
	I1124 09:04:51.515571  695520 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa...
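
The subnet scan in the 695520 block above (skipping 192.168.49.0/24 through 192.168.94.0/24 before settling on 192.168.103.0/24) is driven by `docker network inspect`. A rough, standalone Go sketch of the same enumeration, assuming only that the docker CLI is on PATH; the program itself is illustrative, not minikube's code:

// subnets.go - lists the IPv4 subnets of existing docker networks, the raw
// data behind the "skipping subnet ... that is taken" decisions above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Enumerate network names (assumes the docker CLI is available).
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		panic(err)
	}
	for _, name := range strings.Fields(string(out)) {
		// Ask each network for its IPAM subnet(s); empty for networks without IPv4 config.
		subnet, err := exec.Command("docker", "network", "inspect", name,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue // network may have been removed between ls and inspect
		}
		fmt.Printf("%-25s %s\n", name, strings.TrimSpace(string(subnet)))
	}
}
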
	I1124 09:04:47.893309  696018 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:04:47.893645  696018 start.go:159] libmachine.API.Create for "no-preload-820576" (driver="docker")
	I1124 09:04:47.893687  696018 client.go:173] LocalClient.Create starting
	I1124 09:04:47.893789  696018 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem
	I1124 09:04:47.893833  696018 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:47.893861  696018 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:47.893953  696018 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem
	I1124 09:04:47.893982  696018 main.go:143] libmachine: Decoding PEM data...
	I1124 09:04:47.893999  696018 main.go:143] libmachine: Parsing certificate...
	I1124 09:04:47.894436  696018 cli_runner.go:164] Run: docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:04:47.915789  696018 cli_runner.go:211] docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:04:47.915886  696018 network_create.go:284] running [docker network inspect no-preload-820576] to gather additional debugging logs...
	I1124 09:04:47.915925  696018 cli_runner.go:164] Run: docker network inspect no-preload-820576
	W1124 09:04:47.939725  696018 cli_runner.go:211] docker network inspect no-preload-820576 returned with exit code 1
	I1124 09:04:47.939760  696018 network_create.go:287] error running [docker network inspect no-preload-820576]: docker network inspect no-preload-820576: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-820576 not found
	I1124 09:04:47.939788  696018 network_create.go:289] output of [docker network inspect no-preload-820576]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-820576 not found
	
	** /stderr **
	I1124 09:04:47.939956  696018 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:47.960368  696018 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c654f70fdf0e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:f7:ca:91:9d:ad} reservation:<nil>}
	I1124 09:04:47.961456  696018 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1081c4000c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:b1:6d:32:2c:78} reservation:<nil>}
	I1124 09:04:47.962397  696018 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30fdd1988974 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:59:2f:0a:61:81} reservation:<nil>}
	I1124 09:04:47.963597  696018 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cd297979890 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:91:f3:e4:95:17} reservation:<nil>}
	I1124 09:04:47.964832  696018 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e9cf50}
	I1124 09:04:47.964868  696018 network_create.go:124] attempt to create docker network no-preload-820576 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 09:04:47.964929  696018 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-820576 no-preload-820576
	I1124 09:04:48.017684  696018 network_create.go:108] docker network no-preload-820576 192.168.85.0/24 created
	I1124 09:04:48.017725  696018 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-820576" container
	I1124 09:04:48.017804  696018 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:04:48.037793  696018 cli_runner.go:164] Run: docker volume create no-preload-820576 --label name.minikube.sigs.k8s.io=no-preload-820576 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:04:48.057638  696018 oci.go:103] Successfully created a docker volume no-preload-820576
	I1124 09:04:48.057738  696018 cli_runner.go:164] Run: docker run --rm --name no-preload-820576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820576 --entrypoint /usr/bin/test -v no-preload-820576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:04:48.192090  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:48.509962  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:48.827547  696018 cache.go:107] acquiring lock: {Name:mkbcabeb5a23ff077ffdad64c71e9fe699d94040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827544  696018 cache.go:107] acquiring lock: {Name:mk92c82896924ab47423467b25ccd98ee4128baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827656  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:04:48.827672  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:04:48.827672  696018 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 138.757µs
	I1124 09:04:48.827689  696018 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:04:48.827683  696018 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 176.678µs
	I1124 09:04:48.827708  696018 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:04:48.827708  696018 cache.go:107] acquiring lock: {Name:mkf3a006b133f81ed32779d427a8d0a9b25f9000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827735  696018 cache.go:107] acquiring lock: {Name:mkd74819cb24442927f7fb2cffd47478de40e14c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827766  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:04:48.827773  696018 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 69.196µs
	I1124 09:04:48.827780  696018 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:04:48.827788  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:04:48.827796  696018 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 65.204µs
	I1124 09:04:48.827804  696018 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:04:48.827791  696018 cache.go:107] acquiring lock: {Name:mk6b573bbd33cfc3c3f77668030fb064598572fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827820  696018 cache.go:107] acquiring lock: {Name:mk7f052905284f586f4f1cf24b8c34cc48e0b85b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827866  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:04:48.827873  696018 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 57.027µs
	I1124 09:04:48.827882  696018 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:04:48.827796  696018 cache.go:107] acquiring lock: {Name:mk1d635b72f6d026600360916178f900a450350e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.827887  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:04:48.827900  696018 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 115.907µs
	I1124 09:04:48.827910  696018 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:04:48.827914  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:04:48.827921  696018 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 128.45µs
	I1124 09:04:48.827937  696018 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:04:48.827719  696018 cache.go:107] acquiring lock: {Name:mk8023690ce5b18d9a1789b2f878bf92c1381799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:04:48.828021  696018 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:04:48.828033  696018 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 327.502µs
	I1124 09:04:48.828051  696018 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:04:48.828067  696018 cache.go:87] Successfully saved all images to host disk.
	I1124 09:04:50.353018  696018 cli_runner.go:217] Completed: docker run --rm --name no-preload-820576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820576 --entrypoint /usr/bin/test -v no-preload-820576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.295229864s)
	I1124 09:04:50.353061  696018 oci.go:107] Successfully prepared a docker volume no-preload-820576
	I1124 09:04:50.353130  696018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1124 09:04:50.353205  696018 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:04:50.353233  696018 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:04:50.353275  696018 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:04:50.412447  696018 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-820576 --name no-preload-820576 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820576 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-820576 --network no-preload-820576 --ip 192.168.85.2 --volume no-preload-820576:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:04:51.174340  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Running}}
	I1124 09:04:51.195074  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:04:51.216706  696018 cli_runner.go:164] Run: docker exec no-preload-820576 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:04:51.270513  696018 oci.go:144] the created container "no-preload-820576" has a running status.
	I1124 09:04:51.270555  696018 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa...
	I1124 09:04:51.639069  696018 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:04:51.669871  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:04:51.693409  696018 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:04:51.693441  696018 kic_runner.go:114] Args: [docker exec --privileged no-preload-820576 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:04:51.754414  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:04:51.781590  696018 machine.go:94] provisionDockerMachine start ...
	I1124 09:04:51.781685  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:51.808597  696018 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:51.809054  696018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 09:04:51.809092  696018 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:04:51.963230  696018 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-820576
	
	I1124 09:04:51.963276  696018 ubuntu.go:182] provisioning hostname "no-preload-820576"
	I1124 09:04:51.963339  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:51.984069  696018 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:51.984406  696018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 09:04:51.984432  696018 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-820576 && echo "no-preload-820576" | sudo tee /etc/hostname
	I1124 09:04:52.142431  696018 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-820576
	
	I1124 09:04:52.142545  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.163141  696018 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:52.163483  696018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 09:04:52.163520  696018 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820576/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:04:52.313074  696018 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:04:52.313103  696018 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:04:52.313151  696018 ubuntu.go:190] setting up certificates
	I1124 09:04:52.313174  696018 provision.go:84] configureAuth start
	I1124 09:04:52.313241  696018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:04:52.333178  696018 provision.go:143] copyHostCerts
	I1124 09:04:52.333250  696018 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:04:52.333267  696018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:04:52.333340  696018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:04:52.333454  696018 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:04:52.333479  696018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:04:52.333527  696018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:04:52.333610  696018 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:04:52.333631  696018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:04:52.333670  696018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:04:52.333745  696018 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.no-preload-820576 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-820576]
	I1124 09:04:52.372869  696018 provision.go:177] copyRemoteCerts
	I1124 09:04:52.372936  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:04:52.372984  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.391516  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.495715  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:04:52.515508  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 09:04:52.533110  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:04:52.549620  696018 provision.go:87] duration metric: took 236.431147ms to configureAuth
	I1124 09:04:52.549643  696018 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:04:52.549785  696018 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:04:52.549795  696018 machine.go:97] duration metric: took 768.185276ms to provisionDockerMachine
	I1124 09:04:52.549801  696018 client.go:176] duration metric: took 4.656107804s to LocalClient.Create
	I1124 09:04:52.549817  696018 start.go:167] duration metric: took 4.656176839s to libmachine.API.Create "no-preload-820576"
	I1124 09:04:52.549827  696018 start.go:293] postStartSetup for "no-preload-820576" (driver="docker")
	I1124 09:04:52.549837  696018 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:04:52.549917  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:04:52.549957  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.567598  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.670209  696018 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:04:52.673794  696018 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:04:52.673819  696018 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:04:52.673829  696018 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:04:52.673873  696018 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:04:52.673954  696018 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:04:52.674055  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:04:52.681571  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
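
The repeated `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` calls above resolve the host port Docker published for the container's SSH before each ssh/scp step. A small Go sketch of that lookup; the container name is copied from the log, the helper program itself is illustrative:

// ssh_port.go - resolves the host port mapped to a container's 22/tcp, using
// the same inspect template as the cli_runner lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "no-preload-820576" // container name from this run
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 33063 in this run
}
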
	I1124 09:04:51.668051  695520 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:04:51.701732  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:04:51.724111  695520 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:04:51.724139  695520 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-128377 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:04:51.779671  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:04:51.808240  695520 machine.go:94] provisionDockerMachine start ...
	I1124 09:04:51.808514  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:51.833533  695520 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:51.833868  695520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 09:04:51.833890  695520 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:04:51.988683  695520 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-128377
	
	I1124 09:04:51.988712  695520 ubuntu.go:182] provisioning hostname "old-k8s-version-128377"
	I1124 09:04:51.988769  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.008953  695520 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:52.009275  695520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 09:04:52.009299  695520 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-128377 && echo "old-k8s-version-128377" | sudo tee /etc/hostname
	I1124 09:04:52.164712  695520 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-128377
	
	I1124 09:04:52.164811  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.184388  695520 main.go:143] libmachine: Using SSH client type: native
	I1124 09:04:52.184674  695520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 09:04:52.184701  695520 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-128377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-128377/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-128377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:04:52.328284  695520 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:04:52.328315  695520 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:04:52.328349  695520 ubuntu.go:190] setting up certificates
	I1124 09:04:52.328371  695520 provision.go:84] configureAuth start
	I1124 09:04:52.328437  695520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-128377
	I1124 09:04:52.347382  695520 provision.go:143] copyHostCerts
	I1124 09:04:52.347441  695520 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:04:52.347449  695520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:04:52.347530  695520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:04:52.347615  695520 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:04:52.347624  695520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:04:52.347646  695520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:04:52.347699  695520 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:04:52.347707  695520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:04:52.347724  695520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:04:52.347767  695520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-128377 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-128377]
	I1124 09:04:52.449836  695520 provision.go:177] copyRemoteCerts
	I1124 09:04:52.449907  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:04:52.449955  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.467389  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.568756  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:04:52.590911  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 09:04:52.608291  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:04:52.625476  695520 provision.go:87] duration metric: took 297.076146ms to configureAuth
	I1124 09:04:52.625501  695520 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:04:52.625684  695520 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:04:52.625697  695520 machine.go:97] duration metric: took 817.329123ms to provisionDockerMachine
	I1124 09:04:52.625703  695520 client.go:176] duration metric: took 5.811878386s to LocalClient.Create
	I1124 09:04:52.625724  695520 start.go:167] duration metric: took 5.811947677s to libmachine.API.Create "old-k8s-version-128377"
	I1124 09:04:52.625737  695520 start.go:293] postStartSetup for "old-k8s-version-128377" (driver="docker")
	I1124 09:04:52.625751  695520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:04:52.625805  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:04:52.625861  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.643125  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.746507  695520 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:04:52.750419  695520 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:04:52.750446  695520 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:04:52.750471  695520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:04:52.750527  695520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:04:52.750621  695520 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:04:52.750735  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:04:52.759275  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:04:52.779524  695520 start.go:296] duration metric: took 153.769147ms for postStartSetup
	I1124 09:04:52.779876  695520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-128377
	I1124 09:04:52.797331  695520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/config.json ...
	I1124 09:04:52.797607  695520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:04:52.797652  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.814633  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.914421  695520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:04:52.919231  695520 start.go:128] duration metric: took 6.107446039s to createHost
	I1124 09:04:52.919259  695520 start.go:83] releasing machines lock for "old-k8s-version-128377", held for 6.10762389s
	I1124 09:04:52.919326  695520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-128377
	I1124 09:04:52.937920  695520 ssh_runner.go:195] Run: cat /version.json
	I1124 09:04:52.937964  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.937993  695520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:04:52.938073  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:04:52.957005  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:52.957162  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:04:53.162492  695520 ssh_runner.go:195] Run: systemctl --version
	I1124 09:04:53.168749  695520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:04:53.173128  695520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:04:53.173198  695520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:04:53.196703  695520 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:04:53.196732  695520 start.go:496] detecting cgroup driver to use...
	I1124 09:04:53.196770  695520 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:04:53.196824  695520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:04:53.212821  695520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:04:53.226105  695520 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:04:53.226149  695520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:04:53.245323  695520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:04:53.261892  695520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:04:53.346225  695520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:04:53.440817  695520 docker.go:234] disabling docker service ...
	I1124 09:04:53.440886  695520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:04:53.466043  695520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:04:53.478621  695520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:04:53.566248  695520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:04:53.652228  695520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:04:53.665204  695520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:04:53.679300  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 09:04:53.689354  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:04:53.697996  695520 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:04:53.698043  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:04:53.706349  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.715138  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:04:53.724198  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.732594  695520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:04:53.740362  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:04:53.748766  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:04:53.757048  695520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:04:53.765265  695520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:04:53.772343  695520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:04:53.779254  695520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:04:53.856087  695520 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 09:04:53.959050  695520 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:04:53.959110  695520 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:04:53.963133  695520 start.go:564] Will wait 60s for crictl version
	I1124 09:04:53.963185  695520 ssh_runner.go:195] Run: which crictl
	I1124 09:04:53.966895  695520 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:04:53.994878  695520 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:04:53.994934  695520 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.021265  695520 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.045827  695520 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1124 09:04:52.701569  696018 start.go:296] duration metric: took 151.731915ms for postStartSetup
	I1124 09:04:52.701858  696018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:04:52.719203  696018 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json ...
	I1124 09:04:52.719424  696018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:04:52.719488  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.736084  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.835481  696018 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:04:52.840061  696018 start.go:128] duration metric: took 4.94947332s to createHost
	I1124 09:04:52.840083  696018 start.go:83] releasing machines lock for "no-preload-820576", held for 4.94964132s
	I1124 09:04:52.840148  696018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:04:52.858132  696018 ssh_runner.go:195] Run: cat /version.json
	I1124 09:04:52.858160  696018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:04:52.858222  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.858246  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:04:52.877130  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.877482  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:04:52.975607  696018 ssh_runner.go:195] Run: systemctl --version
	I1124 09:04:53.031452  696018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:04:53.036065  696018 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:04:53.036130  696018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:04:53.059999  696018 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:04:53.060024  696018 start.go:496] detecting cgroup driver to use...
	I1124 09:04:53.060062  696018 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:04:53.060105  696018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:04:53.074505  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:04:53.086089  696018 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:04:53.086143  696018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:04:53.101555  696018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:04:53.118093  696018 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:04:53.204201  696018 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:04:53.300933  696018 docker.go:234] disabling docker service ...
	I1124 09:04:53.301034  696018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:04:53.320036  696018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:04:53.331959  696018 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:04:53.420508  696018 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:04:53.513830  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:04:53.526253  696018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:04:53.540562  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:53.865082  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 09:04:53.876277  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:04:53.885584  696018 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:04:53.885655  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:04:53.895158  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.904766  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:04:53.913841  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:04:53.922747  696018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:04:53.932360  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:04:53.943272  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:04:53.952416  696018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:04:53.961850  696018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:04:53.969795  696018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:04:53.977270  696018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:04:54.067216  696018 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 09:04:54.151776  696018 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:04:54.151849  696018 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:04:54.156309  696018 start.go:564] Will wait 60s for crictl version
	I1124 09:04:54.156367  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:54.160683  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:04:54.187130  696018 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:04:54.187193  696018 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.208524  696018 ssh_runner.go:195] Run: containerd --version
	I1124 09:04:54.233294  696018 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1124 09:04:49.920675  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:49.921171  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:50.420805  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:50.421212  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:04:50.920534  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:54.046841  695520 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:54.064168  695520 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 09:04:54.068915  695520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:04:54.079411  695520 kubeadm.go:884] updating cluster {Name:old-k8s-version-128377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:04:54.079584  695520 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 09:04:54.079651  695520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:04:54.105064  695520 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:04:54.105092  695520 containerd.go:534] Images already preloaded, skipping extraction
	I1124 09:04:54.105153  695520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:04:54.131723  695520 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:04:54.131746  695520 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:04:54.131756  695520 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 containerd true true} ...
	I1124 09:04:54.131858  695520 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-128377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:04:54.131921  695520 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:04:54.160918  695520 cni.go:84] Creating CNI manager for ""
	I1124 09:04:54.160940  695520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:04:54.160955  695520 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:04:54.160976  695520 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-128377 NodeName:old-k8s-version-128377 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:04:54.161123  695520 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-128377"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:04:54.161190  695520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 09:04:54.169102  695520 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:04:54.169150  695520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:04:54.176962  695520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1124 09:04:54.191252  695520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:04:54.206931  695520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I1124 09:04:54.220958  695520 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:04:54.225158  695520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:04:54.236116  695520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:04:54.319599  695520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:04:54.342135  695520 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377 for IP: 192.168.103.2
	I1124 09:04:54.342157  695520 certs.go:195] generating shared ca certs ...
	I1124 09:04:54.342176  695520 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.342355  695520 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:04:54.342406  695520 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:04:54.342416  695520 certs.go:257] generating profile certs ...
	I1124 09:04:54.342497  695520 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key
	I1124 09:04:54.342513  695520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt with IP's: []
	I1124 09:04:54.488402  695520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt ...
	I1124 09:04:54.488432  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt: {Name:mk87cd521056210340bc5798f0387b3f36dc4635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.488613  695520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key ...
	I1124 09:04:54.488628  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key: {Name:mk03c81f6da2f2b54dfd9fa0e30866e3372921ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.488712  695520 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1
	I1124 09:04:54.488729  695520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1124 09:04:54.543616  695520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1 ...
	I1124 09:04:54.543654  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1: {Name:mk2f5faeeb1a8cba2153625fbd7d3a7e54f95aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.543851  695520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1 ...
	I1124 09:04:54.543873  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1: {Name:mk7ed4cadcafdc2e1a661255372b702ae6719654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.543964  695520 certs.go:382] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt.f2d0a0c1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt
	I1124 09:04:54.544040  695520 certs.go:386] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key
	I1124 09:04:54.544132  695520 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key
	I1124 09:04:54.544150  695520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt with IP's: []
	I1124 09:04:54.594781  695520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt ...
	I1124 09:04:54.594837  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt: {Name:mk33ff647329a0bdf714fd27ddf109ec15b6d483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.595015  695520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key ...
	I1124 09:04:54.595034  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key: {Name:mk9bf52d92c35c053f63b6073f2a38e1ff2182d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:04:54.595287  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:04:54.595344  695520 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:04:54.595359  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:04:54.595395  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:04:54.595433  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:04:54.595484  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:04:54.595553  695520 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:04:54.596350  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:04:54.616384  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:04:54.633998  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:04:54.651552  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:04:54.669737  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 09:04:54.686876  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:04:54.703726  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:04:54.720840  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:04:54.737534  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:04:54.757717  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:04:54.774715  695520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:04:54.791052  695520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:04:54.802968  695520 ssh_runner.go:195] Run: openssl version
	I1124 09:04:54.808893  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:04:54.816748  695520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:04:54.820220  695520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:04:54.820260  695520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:04:54.854133  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:04:54.862216  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:04:54.870277  695520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:04:54.873860  695520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:04:54.873906  695520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:04:54.910146  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:04:54.919148  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:04:54.927753  695520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:04:54.931870  695520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:04:54.931921  695520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:04:54.972285  695520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:04:54.981223  695520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:04:54.984999  695520 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:04:54.985067  695520 kubeadm.go:401] StartCluster: {Name:old-k8s-version-128377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:04:54.985165  695520 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:04:54.985213  695520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:04:55.012874  695520 cri.go:89] found id: ""
	I1124 09:04:55.012940  695520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:04:55.020831  695520 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:04:55.029069  695520 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:04:55.029111  695520 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:04:55.036334  695520 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:04:55.036348  695520 kubeadm.go:158] found existing configuration files:
	
	I1124 09:04:55.036384  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:04:55.044532  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:04:55.044579  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:04:55.051885  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:04:55.059335  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:04:55.059381  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:04:55.066924  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:04:55.075157  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:04:55.075202  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:04:55.082536  695520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:04:55.090276  695520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:04:55.090333  695520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:04:55.097848  695520 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:04:55.141844  695520 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 09:04:55.142222  695520 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:04:55.176293  695520 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:04:55.176360  695520 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:04:55.176399  695520 kubeadm.go:319] OS: Linux
	I1124 09:04:55.176522  695520 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:04:55.176607  695520 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:04:55.176692  695520 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:04:55.176788  695520 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:04:55.176861  695520 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:04:55.176926  695520 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:04:55.177000  695520 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:04:55.177072  695520 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:04:55.267260  695520 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:04:55.267430  695520 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:04:55.267573  695520 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 09:04:55.406819  695520 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:04:55.408942  695520 out.go:252]   - Generating certificates and keys ...
	I1124 09:04:55.409040  695520 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:04:55.409154  695520 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:04:55.535942  695520 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:04:55.747446  695520 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:04:56.231180  695520 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:04:56.348617  695520 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:04:56.564540  695520 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:04:56.564771  695520 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-128377] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 09:04:54.234417  696018 cli_runner.go:164] Run: docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:04:54.252265  696018 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:04:54.256402  696018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:04:54.271173  696018 kubeadm.go:884] updating cluster {Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:04:54.271376  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:54.585565  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:54.895614  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:04:55.213448  696018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:04:55.213537  696018 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:04:55.248674  696018 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1124 09:04:55.248704  696018 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.5.24-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 09:04:55.248761  696018 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:55.248818  696018 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.248841  696018 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.248860  696018 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 09:04:55.248864  696018 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.248833  696018 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.248841  696018 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.249034  696018 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.250186  696018 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.250215  696018 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:55.250182  696018 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.24-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.250186  696018 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 09:04:55.250253  696018 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.250254  696018 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.250188  696018 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.250648  696018 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.411211  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139"
	I1124 09:04:55.411274  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.432666  696018 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1124 09:04:55.432717  696018 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.432775  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.436380  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810"
	I1124 09:04:55.436448  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.436570  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.438317  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b"
	I1124 09:04:55.438376  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.445544  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc"
	I1124 09:04:55.445608  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.462611  696018 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1124 09:04:55.462672  696018 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.462735  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.466873  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1124 09:04:55.466944  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1124 09:04:55.469707  696018 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1124 09:04:55.469760  696018 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.469761  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.469806  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.476188  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.5.24-0" and sha "8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d"
	I1124 09:04:55.476260  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.476601  696018 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1124 09:04:55.476645  696018 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.476700  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.476760  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.483510  696018 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46"
	I1124 09:04:55.483571  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.493634  696018 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 09:04:55.493674  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.493687  696018 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 09:04:55.493730  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.504559  696018 cache_images.go:118] "registry.k8s.io/etcd:3.5.24-0" needs transfer: "registry.k8s.io/etcd:3.5.24-0" does not exist at hash "8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d" in container runtime
	I1124 09:04:55.504599  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1124 09:04:55.504606  696018 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.504646  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.512866  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.512892  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.512910  696018 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1124 09:04:55.512950  696018 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.512990  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:55.526695  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:04:55.526717  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.526785  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.539513  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1124 09:04:55.539663  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:04:55.546674  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1124 09:04:55.546750  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.546715  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.564076  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.567023  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1124 09:04:55.567049  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1124 09:04:55.567061  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1124 09:04:55.567151  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:04:55.598524  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.598552  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1124 09:04:55.598652  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1124 09:04:55.598735  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:04:55.614879  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.5.24-0
	I1124 09:04:55.624975  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:55.625072  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:55.679323  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 09:04:55.684055  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1124 09:04:55.684090  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1124 09:04:55.684124  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.684140  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:04:55.684150  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0
	I1124 09:04:55.684159  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.684160  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1124 09:04:55.684171  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1124 09:04:55.684244  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:04:55.736024  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 09:04:55.736135  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 09:04:55.746073  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.746108  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1124 09:04:55.746157  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:55.746175  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.24-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.24-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.24-0': No such file or directory
	I1124 09:04:55.746191  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 --> /var/lib/minikube/images/etcd_3.5.24-0 (23728640 bytes)
	I1124 09:04:55.746248  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:55.801010  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 09:04:55.801049  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 09:04:55.808405  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1124 09:04:55.808441  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1124 09:04:55.880897  696018 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 09:04:55.880969  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1124 09:04:56.015999  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
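The image-load sequence logged above follows a fixed pattern: stat the tarball under /var/lib/minikube/images, scp it over from the host-side cache when the stat fails, then import it into containerd's k8s.io namespace. A minimal manual sketch of the same check-and-import, assuming shell access to the node (e.g. via `minikube ssh -p no-preload-820576`); the image name is simply the one from the log:

	# is the tarball already on the node? (exit status 1 here is why the log shows an scp step)
	stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	# import the transferred tarball into the namespace the kubelet/CRI uses
	sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	# confirm the image is now visible to containerd
	sudo ctr -n=k8s.io images ls | grep pause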
	I1124 09:04:56.068815  696018 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:04:56.068912  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.24-0
	I1124 09:04:56.453297  696018 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1124 09:04:56.453371  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:57.304727  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.24-0: (1.235782073s)
	I1124 09:04:57.304763  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 from cache
	I1124 09:04:57.304794  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:57.304806  696018 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 09:04:57.304847  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1124 09:04:57.304858  696018 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:57.304920  696018 ssh_runner.go:195] Run: which crictl
	I1124 09:04:56.768431  695520 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:04:56.768677  695520 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-128377] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 09:04:57.042517  695520 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:04:57.135211  695520 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:04:57.487492  695520 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:04:57.487607  695520 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:04:57.647815  695520 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:04:57.788032  695520 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:04:58.007063  695520 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:04:58.262043  695520 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:04:58.262616  695520 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:04:58.265868  695520 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:04:55.921561  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:04:55.921607  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:04:58.266858  695520 out.go:252]   - Booting up control plane ...
	I1124 09:04:58.266989  695520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:04:58.267065  695520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:04:58.267746  695520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:04:58.282824  695520 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:04:58.283699  695520 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:04:58.283773  695520 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:04:58.419897  695520 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 09:04:58.797650  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.492766226s)
	I1124 09:04:58.797672  696018 ssh_runner.go:235] Completed: which crictl: (1.492732478s)
	I1124 09:04:58.797693  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1124 09:04:58.797722  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:58.797742  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:04:58.797763  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1124 09:04:59.494097  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1124 09:04:59.494141  696018 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:04:59.494193  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1124 09:04:59.494314  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:00.636087  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.141861944s)
	I1124 09:05:00.636150  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1124 09:05:00.636183  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:05:00.636184  696018 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.141835433s)
	I1124 09:05:00.636272  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1124 09:05:00.636277  696018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:01.829551  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.193240306s)
	I1124 09:05:01.829586  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1124 09:05:01.829561  696018 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.193259021s)
	I1124 09:05:01.829618  696018 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:05:01.829656  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1124 09:05:01.829661  696018 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 09:05:01.829741  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:05:02.922442  695520 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502768 seconds
	I1124 09:05:02.922650  695520 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:05:02.938003  695520 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:05:03.487168  695520 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:05:03.487569  695520 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-128377 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:05:03.997647  695520 kubeadm.go:319] [bootstrap-token] Using token: jnao2u.ovlrxqviyhx4po41
	I1124 09:05:03.999063  695520 out.go:252]   - Configuring RBAC rules ...
	I1124 09:05:03.999223  695520 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:05:04.003823  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:05:04.010298  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:05:04.012923  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:05:04.015535  695520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:05:04.019043  695520 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:05:04.029389  695520 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:05:04.209549  695520 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:05:04.407855  695520 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:05:04.408750  695520 kubeadm.go:319] 
	I1124 09:05:04.408814  695520 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:05:04.408821  695520 kubeadm.go:319] 
	I1124 09:05:04.408930  695520 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:05:04.408949  695520 kubeadm.go:319] 
	I1124 09:05:04.408983  695520 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:05:04.409060  695520 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:05:04.409107  695520 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:05:04.409122  695520 kubeadm.go:319] 
	I1124 09:05:04.409207  695520 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:05:04.409227  695520 kubeadm.go:319] 
	I1124 09:05:04.409283  695520 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:05:04.409289  695520 kubeadm.go:319] 
	I1124 09:05:04.409340  695520 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:05:04.409401  695520 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:05:04.409519  695520 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:05:04.409531  695520 kubeadm.go:319] 
	I1124 09:05:04.409633  695520 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:05:04.409739  695520 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:05:04.409748  695520 kubeadm.go:319] 
	I1124 09:05:04.409856  695520 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jnao2u.ovlrxqviyhx4po41 \
	I1124 09:05:04.409989  695520 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be \
	I1124 09:05:04.410028  695520 kubeadm.go:319] 	--control-plane 
	I1124 09:05:04.410043  695520 kubeadm.go:319] 
	I1124 09:05:04.410157  695520 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:05:04.410168  695520 kubeadm.go:319] 
	I1124 09:05:04.410253  695520 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jnao2u.ovlrxqviyhx4po41 \
	I1124 09:05:04.410416  695520 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be 
	I1124 09:05:04.412734  695520 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:05:04.412863  695520 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:05:04.412887  695520 cni.go:84] Creating CNI manager for ""
	I1124 09:05:04.412895  695520 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:05:04.414780  695520 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:05:00.922661  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:00.922710  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:04.415630  695520 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:05:04.420099  695520 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 09:05:04.420115  695520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:05:04.433073  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:05:05.091722  695520 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:05:05.091870  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-128377 minikube.k8s.io/updated_at=2025_11_24T09_05_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=old-k8s-version-128377 minikube.k8s.io/primary=true
	I1124 09:05:05.092348  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:05.102498  695520 ops.go:34] apiserver oom_adj: -16
	I1124 09:05:05.174868  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:05.675283  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:06.175310  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:02.915588  696018 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.085815853s)
	I1124 09:05:02.915634  696018 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.085954166s)
	I1124 09:05:02.915671  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1124 09:05:02.915639  696018 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 09:05:02.915716  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 09:05:02.976753  696018 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:05:02.976825  696018 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 09:05:03.348632  696018 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 09:05:03.348678  696018 cache_images.go:125] Successfully loaded all cached images
	I1124 09:05:03.348686  696018 cache_images.go:94] duration metric: took 8.099965824s to LoadCachedImages
	I1124 09:05:03.348703  696018 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1124 09:05:03.348825  696018 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
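The kubelet flags above are written as a systemd drop-in rather than baked into the shipped unit file. As a sketch (not something the test itself runs), the effective unit, including the ExecStart override, can be inspected on the node with standard systemctl commands:

	# print the kubelet unit together with its drop-ins (e.g. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf)
	systemctl cat kubelet
	# show only the ExecStart that actually takes effect after the override
	systemctl show kubelet -p ExecStart --no-pager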
	I1124 09:05:03.348894  696018 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:05:03.376137  696018 cni.go:84] Creating CNI manager for ""
	I1124 09:05:03.376168  696018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:05:03.376188  696018 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:05:03.376210  696018 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820576 NodeName:no-preload-820576 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:05:03.376350  696018 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-820576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
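The generated kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new on the node. As a sketch only (assuming a kubeadm recent enough to have `kubeadm config validate`, and that the file has been renamed to kubeadm.yaml as the log does later), a config like this can be sanity-checked before the real init:

	# static validation of the InitConfiguration/ClusterConfiguration/KubeletConfiguration documents
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or exercise the whole init flow without mutating the host
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run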
	
	I1124 09:05:03.376422  696018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:05:03.385368  696018 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1124 09:05:03.385424  696018 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:05:03.394095  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1124 09:05:03.394128  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:05:03.394180  696018 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1124 09:05:03.394191  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1124 09:05:03.394205  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1124 09:05:03.394225  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:05:03.399712  696018 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1124 09:05:03.399743  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1124 09:05:03.399797  696018 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1124 09:05:03.399839  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1124 09:05:03.414063  696018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1124 09:05:03.448582  696018 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1124 09:05:03.448623  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
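The three "Not caching binary" lines show the download source and its checksum companion file. Outside the test, the same artifacts can be fetched and verified by hand; this mirrors the URLs in the log, with the version being the one under test:

	curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl
	curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	# sha256sum exits non-zero if the binary does not match the published digest
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check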
	I1124 09:05:03.941988  696018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:05:03.950659  696018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1124 09:05:03.964545  696018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:05:03.980698  696018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
	I1124 09:05:03.994370  696018 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:05:03.999682  696018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:05:04.011951  696018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:05:04.105068  696018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:05:04.129581  696018 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576 for IP: 192.168.85.2
	I1124 09:05:04.129609  696018 certs.go:195] generating shared ca certs ...
	I1124 09:05:04.129631  696018 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.129796  696018 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:05:04.129861  696018 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:05:04.129876  696018 certs.go:257] generating profile certs ...
	I1124 09:05:04.129944  696018 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key
	I1124 09:05:04.129964  696018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt with IP's: []
	I1124 09:05:04.178331  696018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt ...
	I1124 09:05:04.178368  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt: {Name:mk7a6d48f62cb24db3b80fa6902658a2fab15360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.178586  696018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key ...
	I1124 09:05:04.178605  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key: {Name:mke761c4ec29e36beccc716dc800bc8fd841e3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.178724  696018 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632
	I1124 09:05:04.178748  696018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 09:05:04.417670  696018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632 ...
	I1124 09:05:04.417694  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632: {Name:mk59a2d57d772e51aeeeb2a9a4dca760203e6d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.417874  696018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632 ...
	I1124 09:05:04.417897  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632: {Name:mkdb0be38fd80ef77438b49aa69b9308c6d28ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.418023  696018 certs.go:382] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt.402ae632 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt
	I1124 09:05:04.418147  696018 certs.go:386] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632 -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key
	I1124 09:05:04.418202  696018 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key
	I1124 09:05:04.418217  696018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt with IP's: []
	I1124 09:05:04.604435  696018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt ...
	I1124 09:05:04.604497  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt: {Name:mk5719f2112f16d39272baf4588ce9b65d33d2a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:04.604728  696018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key ...
	I1124 09:05:04.604746  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key: {Name:mk56d8ccc21a879d6506ee3380097e85fb4b4f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
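The profile certificates above are minted by minikube's own cert code against its minikubeCA, with the apiserver serving cert carrying the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. For illustration only, an equivalent CA-signed serving cert with the same SAN set could be produced with openssl; the file names ca.crt/ca.key/apiserver.* are placeholders, not the paths minikube uses:

	openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
	  -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2")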
	I1124 09:05:04.605022  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:05:04.605073  696018 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:05:04.605084  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:05:04.605120  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:05:04.605160  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:05:04.605195  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:05:04.605369  696018 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:05:04.606568  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:05:04.626964  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:05:04.644973  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:05:04.663649  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:05:04.681360  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:05:04.699027  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:05:04.716381  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:05:04.734298  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:05:04.752033  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:05:04.771861  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:05:04.789824  696018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:05:04.808313  696018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:05:04.826085  696018 ssh_runner.go:195] Run: openssl version
	I1124 09:05:04.834356  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:05:04.843772  696018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:05:04.848660  696018 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:05:04.848725  696018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:05:04.887168  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:05:04.897113  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:05:04.907480  696018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:05:04.911694  696018 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:05:04.911746  696018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:05:04.951326  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:05:04.961765  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:05:04.972056  696018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:05:04.976497  696018 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:05:04.976554  696018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:05:05.017003  696018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
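The ln/openssl sequence above is the standard OpenSSL trust-store layout: each CA PEM is linked into /etc/ssl/certs under its subject hash with a ".0" suffix, so b5213941.0 resolves to minikubeCA.pem. Reproduced by hand on the node, using the same files the log names:

	# the hash openssl uses for lookups in /etc/ssl/certs (prints b5213941 for this CA)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0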
	I1124 09:05:05.027292  696018 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:05:05.031547  696018 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:05:05.031616  696018 kubeadm.go:401] StartCluster: {Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:05:05.031711  696018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:05:05.031765  696018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:05:05.062044  696018 cri.go:89] found id: ""
	I1124 09:05:05.062126  696018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:05:05.071887  696018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:05:05.082157  696018 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:05:05.082217  696018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:05:05.091225  696018 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:05:05.091248  696018 kubeadm.go:158] found existing configuration files:
	
	I1124 09:05:05.091296  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:05:05.100600  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:05:05.100657  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:05:05.110555  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:05:05.119216  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:05:05.119288  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:05:05.127876  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:05:05.136154  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:05:05.136205  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:05:05.145077  696018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:05:05.154290  696018 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:05:05.154338  696018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
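The four grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 (or does not exist at all) is removed before kubeadm init runs. Condensed into a sketch of the same check-and-remove loop:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done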
	I1124 09:05:05.162702  696018 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:05:05.200662  696018 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1124 09:05:05.200757  696018 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:05:05.269623  696018 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:05:05.269714  696018 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:05:05.269770  696018 kubeadm.go:319] OS: Linux
	I1124 09:05:05.269842  696018 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:05:05.269920  696018 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:05:05.270003  696018 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:05:05.270084  696018 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:05:05.270155  696018 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:05:05.270223  696018 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:05:05.270303  696018 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:05:05.270377  696018 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:05:05.332844  696018 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:05:05.332992  696018 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:05:05.333150  696018 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:05:06.734694  696018 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 09:05:06.738817  696018 out.go:252]   - Generating certificates and keys ...
	I1124 09:05:06.738929  696018 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:05:06.739072  696018 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:05:06.832143  696018 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:05:06.955015  696018 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:05:07.027143  696018 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:05:07.115762  696018 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:05:07.265716  696018 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:05:07.265857  696018 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-820576] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 09:05:07.364684  696018 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:05:07.364865  696018 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-820576] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 09:05:07.523315  696018 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:05:07.590589  696018 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:05:07.746307  696018 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:05:07.746426  696018 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:05:07.869677  696018 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:05:07.978931  696018 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:05:08.053720  696018 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:05:08.085227  696018 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:05:08.160011  696018 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:05:08.160849  696018 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:05:08.165435  696018 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 09:05:05.923694  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:05.923742  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:06.675415  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:07.175277  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:07.676031  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:08.174962  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:08.675088  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:09.175102  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:09.675096  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:10.175027  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:10.675655  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:11.175703  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:08.166975  696018 out.go:252]   - Booting up control plane ...
	I1124 09:05:08.167117  696018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:05:08.167189  696018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:05:08.167816  696018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:05:08.183769  696018 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:05:08.183936  696018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:05:08.191856  696018 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:05:08.191990  696018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:05:08.192031  696018 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:05:08.308076  696018 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:05:08.308205  696018 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:05:09.309901  696018 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001908715s
	I1124 09:05:09.316051  696018 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:05:09.316157  696018 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 09:05:09.316247  696018 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:05:09.316315  696018 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:05:10.320869  696018 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004644301s
	I1124 09:05:10.832866  696018 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.516703459s
	I1124 09:05:12.317179  696018 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001080604s
	I1124 09:05:12.331544  696018 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:05:12.339378  696018 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:05:12.347526  696018 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:05:12.347705  696018 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-820576 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:05:12.354657  696018 kubeadm.go:319] [bootstrap-token] Using token: awoygq.wealvtzys3befsou
	I1124 09:05:12.355757  696018 out.go:252]   - Configuring RBAC rules ...
	I1124 09:05:12.355888  696018 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:05:12.359613  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:05:12.364202  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:05:12.366491  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:05:12.369449  696018 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:05:12.371508  696018 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:05:12.722783  696018 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:05:13.137535  696018 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:05:13.723038  696018 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:05:13.724197  696018 kubeadm.go:319] 
	I1124 09:05:13.724302  696018 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:05:13.724317  696018 kubeadm.go:319] 
	I1124 09:05:13.724412  696018 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:05:13.724424  696018 kubeadm.go:319] 
	I1124 09:05:13.724520  696018 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:05:13.724630  696018 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:05:13.724716  696018 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:05:13.724730  696018 kubeadm.go:319] 
	I1124 09:05:13.724818  696018 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:05:13.724831  696018 kubeadm.go:319] 
	I1124 09:05:13.724897  696018 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:05:13.724906  696018 kubeadm.go:319] 
	I1124 09:05:13.724990  696018 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:05:13.725105  696018 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:05:13.725212  696018 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:05:13.725221  696018 kubeadm.go:319] 
	I1124 09:05:13.725338  696018 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:05:13.725493  696018 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:05:13.725510  696018 kubeadm.go:319] 
	I1124 09:05:13.725601  696018 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token awoygq.wealvtzys3befsou \
	I1124 09:05:13.725765  696018 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be \
	I1124 09:05:13.725804  696018 kubeadm.go:319] 	--control-plane 
	I1124 09:05:13.725816  696018 kubeadm.go:319] 
	I1124 09:05:13.725934  696018 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:05:13.725944  696018 kubeadm.go:319] 
	I1124 09:05:13.726041  696018 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token awoygq.wealvtzys3befsou \
	I1124 09:05:13.726243  696018 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be 
	I1124 09:05:13.728504  696018 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:05:13.728661  696018 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:05:13.728704  696018 cni.go:84] Creating CNI manager for ""
	I1124 09:05:13.728716  696018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:05:13.730529  696018 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:05:10.924882  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:10.924923  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:11.109506  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:47578->192.168.76.2:8443: read: connection reset by peer
	I1124 09:05:11.421112  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:11.421646  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:11.920950  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:11.921496  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:12.421219  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:12.421692  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:12.921430  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:12.921911  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:13.420431  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:13.420926  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:13.920542  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:13.921060  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:14.420434  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:14.420859  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
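[editor's note] The alternating api_server.go:253/269 lines above are a health-probe loop: minikube repeatedly GETs the apiserver's /healthz endpoint and treats connection refused, connection reset, or client timeouts as "not up yet", retrying about every half second. A minimal, stand-alone sketch of that pattern (the timeout, retry interval, and TLS handling here are assumptions for illustration, not minikube's actual code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log above; everything else is assumed.
        const healthz = "https://192.168.76.2:8443/healthz"
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // the bootstrapping apiserver serves a self-signed certificate
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get(healthz)
            if err != nil {
                // mirrors the "stopped: ... connection refused / context deadline exceeded" entries
                fmt.Println("stopped:", err)
                time.Sleep(500 * time.Millisecond)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }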
	I1124 09:05:11.675776  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:12.175192  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:12.675267  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:13.175941  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:13.675281  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:14.175267  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:14.675185  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.175391  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.675966  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.175887  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.675144  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.175281  695520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.260591  695520 kubeadm.go:1114] duration metric: took 12.168846115s to wait for elevateKubeSystemPrivileges
	I1124 09:05:17.260625  695520 kubeadm.go:403] duration metric: took 22.275566194s to StartCluster
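[editor's note] The burst of identical "kubectl get sa default" runs above, spaced roughly 500ms apart from 09:05:06.675 through 09:05:17.175, is the elevateKubeSystemPrivileges wait: kubeadm has finished, but the cluster is only treated as usable once the "default" ServiceAccount exists. A rough stand-alone equivalent of that poll (the interval and error handling are assumptions; paths are the ones shown in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for {
            cmd := exec.Command("sudo",
                "/var/lib/minikube/binaries/v1.28.0/kubectl",
                "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default ServiceAccount exists; cluster is usable")
                return
            }
            // the log shows attempts roughly every 500ms; this interval is an assumption
            time.Sleep(500 * time.Millisecond)
        }
    }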
	I1124 09:05:17.260655  695520 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:17.260738  695520 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:05:17.261860  695520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:17.262121  695520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:05:17.262124  695520 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:05:17.262197  695520 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:05:17.262308  695520 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-128377"
	I1124 09:05:17.262334  695520 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-128377"
	I1124 09:05:17.262358  695520 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:05:17.262376  695520 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:05:17.262351  695520 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-128377"
	I1124 09:05:17.262443  695520 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-128377"
	I1124 09:05:17.262844  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:05:17.263075  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:05:17.263365  695520 out.go:179] * Verifying Kubernetes components...
	I1124 09:05:17.264408  695520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:05:17.287510  695520 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-128377"
	I1124 09:05:17.287559  695520 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:05:17.287978  695520 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:05:17.288769  695520 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:13.732137  696018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:05:13.737711  696018 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1124 09:05:13.737726  696018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:05:13.752118  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:05:13.951744  696018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:05:13.951795  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:13.951847  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-820576 minikube.k8s.io/updated_at=2025_11_24T09_05_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=no-preload-820576 minikube.k8s.io/primary=true
	I1124 09:05:13.962047  696018 ops.go:34] apiserver oom_adj: -16
	I1124 09:05:14.022754  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:14.523671  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.023231  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:15.523083  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.023230  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:16.523666  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.022940  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.523444  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:17.290230  695520 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:17.290253  695520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:05:17.290314  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:05:17.317679  695520 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:17.317704  695520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:05:17.317768  695520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:05:17.319048  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:05:17.343853  695520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:05:17.366525  695520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:05:17.411998  695520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:05:17.447003  695520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:17.463082  695520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:17.632983  695520 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 09:05:17.634312  695520 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-128377" to be "Ready" ...
	I1124 09:05:17.888856  695520 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
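[editor's note] The long sed pipeline above (09:05:17.366525) rewrites the coredns ConfigMap before piping it back through "kubectl replace": it inserts a log directive ahead of errors and a hosts stanza ahead of the forward plugin, so that host.minikube.internal resolves to the host-side gateway (192.168.103.1 for this cluster). After the edit, the relevant part of the Corefile looks roughly like this (other default plugins elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...
        ...
    }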
	I1124 09:05:18.022851  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:18.523601  696018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:05:18.589169  696018 kubeadm.go:1114] duration metric: took 4.637423043s to wait for elevateKubeSystemPrivileges
	I1124 09:05:18.589209  696018 kubeadm.go:403] duration metric: took 13.557597169s to StartCluster
	I1124 09:05:18.589237  696018 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:18.589321  696018 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:05:18.590747  696018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:05:18.590988  696018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:05:18.591000  696018 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:05:18.591095  696018 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:05:18.591206  696018 addons.go:70] Setting storage-provisioner=true in profile "no-preload-820576"
	I1124 09:05:18.591219  696018 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:05:18.591236  696018 addons.go:239] Setting addon storage-provisioner=true in "no-preload-820576"
	I1124 09:05:18.591251  696018 addons.go:70] Setting default-storageclass=true in profile "no-preload-820576"
	I1124 09:05:18.591275  696018 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:05:18.591283  696018 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820576"
	I1124 09:05:18.591664  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:05:18.591855  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:05:18.592299  696018 out.go:179] * Verifying Kubernetes components...
	I1124 09:05:18.593599  696018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:05:18.615163  696018 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:05:18.615451  696018 addons.go:239] Setting addon default-storageclass=true in "no-preload-820576"
	I1124 09:05:18.615530  696018 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:05:18.615851  696018 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:05:18.616223  696018 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:18.616245  696018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:05:18.616301  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:05:18.646443  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:05:18.647885  696018 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:18.647963  696018 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:05:18.648059  696018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:05:18.675529  696018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:05:18.685797  696018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:05:18.752704  696018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:05:18.775922  696018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:05:18.800792  696018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:05:18.878758  696018 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 09:05:18.880873  696018 node_ready.go:35] waiting up to 6m0s for node "no-preload-820576" to be "Ready" ...
	I1124 09:05:19.096304  696018 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 09:05:14.921188  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:14.921633  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:15.421327  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:15.421818  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:15.920573  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:15.921034  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:16.421282  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:16.421841  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:16.921386  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:16.921942  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:17.420551  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:17.421007  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:17.920666  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:17.921181  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:18.420539  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:18.421011  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:18.920611  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:18.921079  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:19.420539  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:19.421004  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:17.889849  695520 addons.go:530] duration metric: took 627.656763ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:05:18.137738  695520 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-128377" context rescaled to 1 replicas
	W1124 09:05:19.637948  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	I1124 09:05:19.097398  696018 addons.go:530] duration metric: took 506.310963ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:05:19.383938  696018 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-820576" context rescaled to 1 replicas
	W1124 09:05:20.884989  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
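[editor's note] The node_ready.go warnings above come from a poll of the node's Ready condition, which typically flips to "Ready" only once the pod network (kindnet here) is in place. A minimal, hypothetical client-go equivalent of that check (not minikube's actual implementation; the kubeconfig path is the one from the log, the poll interval is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-820576", metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    // the "Ready":"False" warnings above correspond to this condition being false
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
    }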
	I1124 09:05:19.920806  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:19.921207  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:20.420831  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:20.421312  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:20.920613  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:20.921185  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:21.420832  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:21.421240  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:21.920531  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:05:21.921019  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:05:22.420552  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1124 09:05:21.638057  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:23.638668  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:26.137883  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:23.383937  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	W1124 09:05:25.384443  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	I1124 09:05:27.421276  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:27.421318  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1124 09:05:28.138098  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:30.638120  695520 node_ready.go:57] node "old-k8s-version-128377" has "Ready":"False" status (will retry)
	W1124 09:05:27.884284  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	W1124 09:05:29.884474  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	W1124 09:05:32.384199  696018 node_ready.go:57] node "no-preload-820576" has "Ready":"False" status (will retry)
	I1124 09:05:31.637332  695520 node_ready.go:49] node "old-k8s-version-128377" is "Ready"
	I1124 09:05:31.637368  695520 node_ready.go:38] duration metric: took 14.003009675s for node "old-k8s-version-128377" to be "Ready" ...
	I1124 09:05:31.637385  695520 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:05:31.637443  695520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:05:31.650126  695520 api_server.go:72] duration metric: took 14.387953281s to wait for apiserver process to appear ...
	I1124 09:05:31.650156  695520 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:05:31.650179  695520 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:05:31.654078  695520 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 09:05:31.655253  695520 api_server.go:141] control plane version: v1.28.0
	I1124 09:05:31.655280  695520 api_server.go:131] duration metric: took 5.117021ms to wait for apiserver health ...
	I1124 09:05:31.655289  695520 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:05:31.658830  695520 system_pods.go:59] 8 kube-system pods found
	I1124 09:05:31.658868  695520 system_pods.go:61] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:31.658877  695520 system_pods.go:61] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:31.658889  695520 system_pods.go:61] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:31.658895  695520 system_pods.go:61] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:31.658906  695520 system_pods.go:61] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:31.658910  695520 system_pods.go:61] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:31.658916  695520 system_pods.go:61] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:31.658921  695520 system_pods.go:61] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:31.658927  695520 system_pods.go:74] duration metric: took 3.632262ms to wait for pod list to return data ...
	I1124 09:05:31.658936  695520 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:05:31.660923  695520 default_sa.go:45] found service account: "default"
	I1124 09:05:31.660942  695520 default_sa.go:55] duration metric: took 2.000088ms for default service account to be created ...
	I1124 09:05:31.660950  695520 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:05:31.664223  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:31.664263  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:31.664272  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:31.664280  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:31.664284  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:31.664287  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:31.664291  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:31.664294  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:31.664300  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:31.664333  695520 retry.go:31] will retry after 195.108791ms: missing components: kube-dns
	I1124 09:05:31.863438  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:31.863494  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:31.863505  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:31.863515  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:31.863520  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:31.863525  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:31.863528  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:31.863540  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:31.863557  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:31.863579  695520 retry.go:31] will retry after 244.252087ms: missing components: kube-dns
	I1124 09:05:32.111547  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:32.111586  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:32.111595  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:32.111603  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:32.111608  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:32.111614  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:32.111628  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:32.111634  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:32.111641  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:32.111660  695520 retry.go:31] will retry after 471.342676ms: missing components: kube-dns
	I1124 09:05:32.587354  695520 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:32.587384  695520 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Running
	I1124 09:05:32.587389  695520 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running
	I1124 09:05:32.587393  695520 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running
	I1124 09:05:32.587397  695520 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running
	I1124 09:05:32.587402  695520 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running
	I1124 09:05:32.587405  695520 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running
	I1124 09:05:32.587408  695520 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running
	I1124 09:05:32.587411  695520 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Running
	I1124 09:05:32.587420  695520 system_pods.go:126] duration metric: took 926.463548ms to wait for k8s-apps to be running ...
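[editor's note] The "will retry after ...: missing components: kube-dns" entries above come from minikube's retry helper, which keeps re-listing the kube-system pods with short pauses until CoreDNS reports Running. A purely generic sketch of that retry-until-healthy pattern (the pause schedule and the stand-in check are assumptions, not what retry.go actually does):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryUntil keeps calling check until it returns nil or maxWait elapses,
    // sleeping a fixed pause between attempts. Purely illustrative.
    func retryUntil(check func() error, pause, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting: %w", err)
            }
            fmt.Printf("will retry after %v: %v\n", pause, err)
            time.Sleep(pause)
        }
    }

    func main() {
        attempts := 0
        // stand-in check: pretend kube-dns only becomes Running on the 4th look
        _ = retryUntil(func() error {
            attempts++
            if attempts < 4 {
                return errors.New("missing components: kube-dns")
            }
            return nil
        }, 250*time.Millisecond, time.Minute)
    }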
	I1124 09:05:32.587428  695520 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:05:32.587503  695520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:05:32.602305  695520 system_svc.go:56] duration metric: took 14.864147ms WaitForService to wait for kubelet
	I1124 09:05:32.602336  695520 kubeadm.go:587] duration metric: took 15.340181249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:05:32.602385  695520 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:05:32.605212  695520 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:05:32.605242  695520 node_conditions.go:123] node cpu capacity is 8
	I1124 09:05:32.605271  695520 node_conditions.go:105] duration metric: took 2.87532ms to run NodePressure ...
	I1124 09:05:32.605293  695520 start.go:242] waiting for startup goroutines ...
	I1124 09:05:32.605308  695520 start.go:247] waiting for cluster config update ...
	I1124 09:05:32.605327  695520 start.go:256] writing updated cluster config ...
	I1124 09:05:32.605690  695520 ssh_runner.go:195] Run: rm -f paused
	I1124 09:05:32.610319  695520 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:32.614557  695520 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vxxnm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.619322  695520 pod_ready.go:94] pod "coredns-5dd5756b68-vxxnm" is "Ready"
	I1124 09:05:32.619349  695520 pod_ready.go:86] duration metric: took 4.765973ms for pod "coredns-5dd5756b68-vxxnm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.622417  695520 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.626873  695520 pod_ready.go:94] pod "etcd-old-k8s-version-128377" is "Ready"
	I1124 09:05:32.626900  695520 pod_ready.go:86] duration metric: took 4.45394ms for pod "etcd-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.629800  695520 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.634310  695520 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-128377" is "Ready"
	I1124 09:05:32.634338  695520 pod_ready.go:86] duration metric: took 4.514426ms for pod "kube-apiserver-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:32.637382  695520 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.015375  695520 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-128377" is "Ready"
	I1124 09:05:33.015406  695520 pod_ready.go:86] duration metric: took 378.000797ms for pod "kube-controller-manager-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.215146  695520 pod_ready.go:83] waiting for pod "kube-proxy-fpbs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.614362  695520 pod_ready.go:94] pod "kube-proxy-fpbs2" is "Ready"
	I1124 09:05:33.614392  695520 pod_ready.go:86] duration metric: took 399.215049ms for pod "kube-proxy-fpbs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:33.815166  695520 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.214969  695520 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-128377" is "Ready"
	I1124 09:05:34.214999  695520 pod_ready.go:86] duration metric: took 399.806564ms for pod "kube-scheduler-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.215011  695520 pod_ready.go:40] duration metric: took 1.604660669s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:34.261989  695520 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 09:05:34.263612  695520 out.go:203] 
	W1124 09:05:34.264723  695520 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 09:05:34.265770  695520 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 09:05:34.267170  695520 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-128377" cluster and "default" namespace by default
	I1124 09:05:32.422898  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:05:32.423021  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:05:32.423106  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:05:32.453902  685562 cri.go:89] found id: "1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:05:32.453922  685562 cri.go:89] found id: "4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	I1124 09:05:32.453927  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:05:32.453929  685562 cri.go:89] found id: ""
	I1124 09:05:32.453937  685562 logs.go:282] 3 containers: [1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:05:32.454000  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.458469  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.462439  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.466262  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:05:32.466335  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:05:32.496086  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:05:32.496112  685562 cri.go:89] found id: ""
	I1124 09:05:32.496122  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:05:32.496186  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.500443  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:05:32.500532  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:05:32.528567  685562 cri.go:89] found id: ""
	I1124 09:05:32.528602  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.528610  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:05:32.528617  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:05:32.528677  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:05:32.557355  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:05:32.557375  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:05:32.557379  685562 cri.go:89] found id: ""
	I1124 09:05:32.557388  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:05:32.557445  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.561666  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.565691  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:05:32.565776  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:05:32.594818  685562 cri.go:89] found id: ""
	I1124 09:05:32.594841  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.594848  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:05:32.594855  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:05:32.594900  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:05:32.625049  685562 cri.go:89] found id: "4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:05:32.625068  685562 cri.go:89] found id: "87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0"
	I1124 09:05:32.625073  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:05:32.625078  685562 cri.go:89] found id: ""
	I1124 09:05:32.625087  685562 logs.go:282] 3 containers: [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d 87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:05:32.625142  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.630042  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.634965  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:05:32.639315  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:05:32.639376  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:05:32.669355  685562 cri.go:89] found id: ""
	I1124 09:05:32.669384  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.669392  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:05:32.669398  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:05:32.669449  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:05:32.697559  685562 cri.go:89] found id: ""
	I1124 09:05:32.697586  685562 logs.go:282] 0 containers: []
	W1124 09:05:32.697596  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:05:32.697610  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:05:32.697645  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:05:32.736120  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:05:32.736153  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:05:32.768484  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:05:32.768526  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:05:32.836058  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:05:32.836100  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:05:32.853541  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:05:32.853613  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1124 09:05:33.384739  696018 node_ready.go:49] node "no-preload-820576" is "Ready"
	I1124 09:05:33.384778  696018 node_ready.go:38] duration metric: took 14.503869435s for node "no-preload-820576" to be "Ready" ...
	I1124 09:05:33.384797  696018 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:05:33.384861  696018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:05:33.401268  696018 api_server.go:72] duration metric: took 14.81022929s to wait for apiserver process to appear ...
	I1124 09:05:33.401299  696018 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:05:33.401324  696018 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 09:05:33.406015  696018 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 09:05:33.407175  696018 api_server.go:141] control plane version: v1.35.0-beta.0
	I1124 09:05:33.407215  696018 api_server.go:131] duration metric: took 5.908148ms to wait for apiserver health ...
	I1124 09:05:33.407226  696018 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:05:33.410293  696018 system_pods.go:59] 8 kube-system pods found
	I1124 09:05:33.410331  696018 system_pods.go:61] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.410338  696018 system_pods.go:61] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.410346  696018 system_pods.go:61] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.410352  696018 system_pods.go:61] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.410360  696018 system_pods.go:61] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.410365  696018 system_pods.go:61] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.410369  696018 system_pods.go:61] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.410382  696018 system_pods.go:61] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.410391  696018 system_pods.go:74] duration metric: took 3.156993ms to wait for pod list to return data ...
	I1124 09:05:33.410403  696018 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:05:33.413158  696018 default_sa.go:45] found service account: "default"
	I1124 09:05:33.413182  696018 default_sa.go:55] duration metric: took 2.772178ms for default service account to be created ...
	I1124 09:05:33.413192  696018 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:05:33.416818  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:33.416849  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.416856  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.416863  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.416868  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.416874  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.416879  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.416884  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.416891  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.416935  696018 retry.go:31] will retry after 275.944352ms: missing components: kube-dns
	I1124 09:05:33.697203  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:33.697247  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.697259  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.697269  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.697274  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.697285  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.697290  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.697297  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.697304  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.697327  696018 retry.go:31] will retry after 278.68714ms: missing components: kube-dns
	I1124 09:05:33.979933  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:33.979971  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:05:33.979977  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:33.979984  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:33.979987  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:33.979991  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:33.979994  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:33.979998  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:33.980003  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:05:33.980020  696018 retry.go:31] will retry after 448.083964ms: missing components: kube-dns
	I1124 09:05:34.432301  696018 system_pods.go:86] 8 kube-system pods found
	I1124 09:05:34.432341  696018 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Running
	I1124 09:05:34.432350  696018 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running
	I1124 09:05:34.432355  696018 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running
	I1124 09:05:34.432362  696018 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running
	I1124 09:05:34.432369  696018 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running
	I1124 09:05:34.432374  696018 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running
	I1124 09:05:34.432379  696018 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running
	I1124 09:05:34.432384  696018 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Running
	I1124 09:05:34.432395  696018 system_pods.go:126] duration metric: took 1.019195458s to wait for k8s-apps to be running ...
	I1124 09:05:34.432410  696018 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:05:34.432534  696018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:05:34.451401  696018 system_svc.go:56] duration metric: took 18.978773ms WaitForService to wait for kubelet
	I1124 09:05:34.451444  696018 kubeadm.go:587] duration metric: took 15.860405681s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:05:34.451483  696018 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:05:34.454386  696018 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:05:34.454410  696018 node_conditions.go:123] node cpu capacity is 8
	I1124 09:05:34.454427  696018 node_conditions.go:105] duration metric: took 2.938205ms to run NodePressure ...
	I1124 09:05:34.454440  696018 start.go:242] waiting for startup goroutines ...
	I1124 09:05:34.454450  696018 start.go:247] waiting for cluster config update ...
	I1124 09:05:34.454478  696018 start.go:256] writing updated cluster config ...
	I1124 09:05:34.454771  696018 ssh_runner.go:195] Run: rm -f paused
	I1124 09:05:34.459160  696018 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:34.462567  696018 pod_ready.go:83] waiting for pod "coredns-7d764666f9-b6dpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.466303  696018 pod_ready.go:94] pod "coredns-7d764666f9-b6dpn" is "Ready"
	I1124 09:05:34.466324  696018 pod_ready.go:86] duration metric: took 3.738029ms for pod "coredns-7d764666f9-b6dpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.468156  696018 pod_ready.go:83] waiting for pod "etcd-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.471750  696018 pod_ready.go:94] pod "etcd-no-preload-820576" is "Ready"
	I1124 09:05:34.471775  696018 pod_ready.go:86] duration metric: took 3.597676ms for pod "etcd-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.473507  696018 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.477092  696018 pod_ready.go:94] pod "kube-apiserver-no-preload-820576" is "Ready"
	I1124 09:05:34.477115  696018 pod_ready.go:86] duration metric: took 3.588223ms for pod "kube-apiserver-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.478724  696018 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:34.862953  696018 pod_ready.go:94] pod "kube-controller-manager-no-preload-820576" is "Ready"
	I1124 09:05:34.862977  696018 pod_ready.go:86] duration metric: took 384.235741ms for pod "kube-controller-manager-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:35.063039  696018 pod_ready.go:83] waiting for pod "kube-proxy-vz24l" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:35.463183  696018 pod_ready.go:94] pod "kube-proxy-vz24l" is "Ready"
	I1124 09:05:35.463217  696018 pod_ready.go:86] duration metric: took 400.149042ms for pod "kube-proxy-vz24l" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:35.664151  696018 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:36.063590  696018 pod_ready.go:94] pod "kube-scheduler-no-preload-820576" is "Ready"
	I1124 09:05:36.063619  696018 pod_ready.go:86] duration metric: took 399.441074ms for pod "kube-scheduler-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:05:36.063632  696018 pod_ready.go:40] duration metric: took 1.604443296s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:05:36.110852  696018 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1124 09:05:36.112796  696018 out.go:179] * Done! kubectl is now configured to use "no-preload-820576" cluster and "default" namespace by default
	I1124 09:05:43.195573  685562 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.341935277s)
	W1124 09:05:43.195644  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:44544->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:44544->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1124 09:05:43.195660  685562 logs.go:123] Gathering logs for kube-apiserver [1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365] ...
	I1124 09:05:43.195679  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:05:43.229092  685562 logs.go:123] Gathering logs for kube-apiserver [4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680] ...
	I1124 09:05:43.229122  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	W1124 09:05:43.256709  685562 logs.go:130] failed kube-apiserver [4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680": Process exited with status 1
	stdout:
	
	stderr:
	E1124 09:05:43.254237    2218 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680\": not found" containerID="4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	time="2025-11-24T09:05:43Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680\": not found"
	 output: 
	** stderr ** 
	E1124 09:05:43.254237    2218 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680\": not found" containerID="4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680"
	time="2025-11-24T09:05:43Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"4d75c0e16a149ca1a7ec4e96d68718e51659aa9619085a44b28b38f4a7716680\": not found"
	
	** /stderr **
	I1124 09:05:43.256732  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:05:43.256745  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:05:43.296899  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:05:43.296933  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:05:43.327780  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:05:43.327805  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:05:43.363107  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:05:43.363150  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:05:43.395896  685562 logs.go:123] Gathering logs for kube-controller-manager [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d] ...
	I1124 09:05:43.395929  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:05:43.423650  685562 logs.go:123] Gathering logs for kube-controller-manager [87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0] ...
	I1124 09:05:43.423680  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 87fb36f1d5c6bc7114bcd8099f1af4b27cea41c648c6e97f4789f111172ccbb0"
	I1124 09:05:43.453581  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:05:43.453608  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ba53f9b2ebdff       56cc512116c8f       9 seconds ago       Running             busybox                   0                   831740f12ed9d       busybox                                     default
	1ccff83dea1f3       aa5e3ebc0dfed       15 seconds ago      Running             coredns                   0                   e0449c7605999       coredns-7d764666f9-b6dpn                    kube-system
	372566a488aa6       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   0d4413669c9e7       storage-provisioner                         kube-system
	f013ec6444310       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   fe354f65119b6       kindnet-kvm52                               kube-system
	d11c1a1929cbd       8a4ded35a3eb1       29 seconds ago      Running             kube-proxy                0                   57880ad4cbc75       kube-proxy-vz24l                            kube-system
	3792977e1319f       7bb6219ddab95       39 seconds ago      Running             kube-scheduler            0                   e565b2950cf64       kube-scheduler-no-preload-820576            kube-system
	1cc365be5ed1f       45f3cc72d235f       39 seconds ago      Running             kube-controller-manager   0                   cb2692f06f53c       kube-controller-manager-no-preload-820576   kube-system
	942b50869b3b6       aa9d02839d8de       39 seconds ago      Running             kube-apiserver            0                   e9610922053aa       kube-apiserver-no-preload-820576            kube-system
	0d5c89e98d645       a3e246e9556e9       39 seconds ago      Running             etcd                      0                   169ddc6ab9603       etcd-no-preload-820576                      kube-system
	
	
	==> containerd <==
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.453159820Z" level=info msg="connecting to shim 372566a488aa6257b59eba829cf1e66299ccffe9066320bc512378d4a8f37fc3" address="unix:///run/containerd/s/328d596d67a9c8178c77086cf6bfbb902ebec5e36ed37603d7ba9a85ce28ed2c" protocol=ttrpc version=3
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.458836377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-b6dpn,Uid:c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0449c7605999fe2d4dcfd63696b4c675d2ebc7f7eb8c41128d3193b899aee4d\""
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.464455615Z" level=info msg="CreateContainer within sandbox \"e0449c7605999fe2d4dcfd63696b4c675d2ebc7f7eb8c41128d3193b899aee4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.471239221Z" level=info msg="Container 1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.477263883Z" level=info msg="CreateContainer within sandbox \"e0449c7605999fe2d4dcfd63696b4c675d2ebc7f7eb8c41128d3193b899aee4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5\""
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.477734207Z" level=info msg="StartContainer for \"1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5\""
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.478601790Z" level=info msg="connecting to shim 1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5" address="unix:///run/containerd/s/a82eab7c8d1b4c38df30ab62991838299020d1b0af8a8d1b36f581eae59ef54a" protocol=ttrpc version=3
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.510256932Z" level=info msg="StartContainer for \"372566a488aa6257b59eba829cf1e66299ccffe9066320bc512378d4a8f37fc3\" returns successfully"
	Nov 24 09:05:33 no-preload-820576 containerd[658]: time="2025-11-24T09:05:33.531678403Z" level=info msg="StartContainer for \"1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5\" returns successfully"
	Nov 24 09:05:36 no-preload-820576 containerd[658]: time="2025-11-24T09:05:36.586122875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:ed19b18b-e761-4aff-8676-38be0169fca8,Namespace:default,Attempt:0,}"
	Nov 24 09:05:36 no-preload-820576 containerd[658]: time="2025-11-24T09:05:36.625353770Z" level=info msg="connecting to shim 831740f12ed9de73f3f54c86d73b7fad71866782ed9656618d60457d1203d284" address="unix:///run/containerd/s/6f2f0b70df621171749ff830e8c830132481fed0cd60e69bb1fa1cb83a2a46e2" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 09:05:36 no-preload-820576 containerd[658]: time="2025-11-24T09:05:36.692941527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:ed19b18b-e761-4aff-8676-38be0169fca8,Namespace:default,Attempt:0,} returns sandbox id \"831740f12ed9de73f3f54c86d73b7fad71866782ed9656618d60457d1203d284\""
	Nov 24 09:05:36 no-preload-820576 containerd[658]: time="2025-11-24T09:05:36.695009096Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.908578564Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.909174070Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396645"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.910365584Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.911989078Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.912276385Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.217226885s"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.912311145Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.916279483Z" level=info msg="CreateContainer within sandbox \"831740f12ed9de73f3f54c86d73b7fad71866782ed9656618d60457d1203d284\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.922931578Z" level=info msg="Container ba53f9b2ebdff2ced159f0e7ca034b202bd9776c53112341413551b23ed9b927: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.928338641Z" level=info msg="CreateContainer within sandbox \"831740f12ed9de73f3f54c86d73b7fad71866782ed9656618d60457d1203d284\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"ba53f9b2ebdff2ced159f0e7ca034b202bd9776c53112341413551b23ed9b927\""
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.928901777Z" level=info msg="StartContainer for \"ba53f9b2ebdff2ced159f0e7ca034b202bd9776c53112341413551b23ed9b927\""
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.929746506Z" level=info msg="connecting to shim ba53f9b2ebdff2ced159f0e7ca034b202bd9776c53112341413551b23ed9b927" address="unix:///run/containerd/s/6f2f0b70df621171749ff830e8c830132481fed0cd60e69bb1fa1cb83a2a46e2" protocol=ttrpc version=3
	Nov 24 09:05:38 no-preload-820576 containerd[658]: time="2025-11-24T09:05:38.988244447Z" level=info msg="StartContainer for \"ba53f9b2ebdff2ced159f0e7ca034b202bd9776c53112341413551b23ed9b927\" returns successfully"
	
	
	==> coredns [1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54342 - 36437 "HINFO IN 4736891951819189544.4092727598254416540. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025902362s
	
	
	==> describe nodes <==
	Name:               no-preload-820576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-820576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=no-preload-820576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_05_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:05:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-820576
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:05:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:05:43 +0000   Mon, 24 Nov 2025 09:05:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:05:43 +0000   Mon, 24 Nov 2025 09:05:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:05:43 +0000   Mon, 24 Nov 2025 09:05:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:05:43 +0000   Mon, 24 Nov 2025 09:05:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-820576
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                d949245b-a9ed-47a9-91d5-7d5561bd8b90
	  Boot ID:                    f052cd47-57de-4521-b5fb-139979fdced9
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-7d764666f9-b6dpn                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-no-preload-820576                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-kvm52                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-820576             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-820576    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-vz24l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-820576             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  31s   node-controller  Node no-preload-820576 event: Registered Node no-preload-820576 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [0d5c89e98d645bf73cd4c5c3f30b9202f3ec35a62f3f8d3ae062d5d623eccb24] <==
	{"level":"warn","ts":"2025-11-24T09:05:10.214265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.224023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.231321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.239174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.246909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.253281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.260214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.266550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.273527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.282603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.288554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.295211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.301519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.308085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.314261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.321387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.327694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.333832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.339908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.361663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.364933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.371238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.377811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.384070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:05:10.431908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34196","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:05:48 up  3:48,  0 user,  load average: 4.56, 3.47, 10.77
	Linux no-preload-820576 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f013ec6444310f79abf35dd005056c59b873c4bea9b56849cc31c4d45f1fd1ea] <==
	I1124 09:05:22.747683       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:05:22.747935       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 09:05:22.748082       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:05:22.748098       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:05:22.748121       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:05:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:05:22.952020       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:05:22.952094       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:05:22.952107       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:05:22.952322       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:05:23.353143       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:05:23.353172       1 metrics.go:72] Registering metrics
	I1124 09:05:23.353260       1 controller.go:711] "Syncing nftables rules"
	I1124 09:05:32.951899       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:05:32.951958       1 main.go:301] handling current node
	I1124 09:05:42.952830       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:05:42.952880       1 main.go:301] handling current node
	
	
	==> kube-apiserver [942b50869b3b6efe304af13454ac7bcfcd639ee8d85edb9543534540fab1a5ac] <==
	I1124 09:05:10.909334       1 policy_source.go:248] refreshing policies
	E1124 09:05:10.932548       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1124 09:05:10.981562       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:05:10.985259       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:05:10.985502       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1124 09:05:10.990971       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:05:11.076869       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:05:11.784131       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1124 09:05:11.788179       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:05:11.788196       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1124 09:05:12.209320       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:05:12.246151       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:05:12.285780       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:05:12.290718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 09:05:12.291514       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:05:12.294940       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:05:12.826079       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:05:13.127776       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:05:13.136696       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:05:13.143569       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:05:18.481337       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:05:18.484897       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:05:18.680072       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:05:18.829415       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 09:05:45.392426       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:55526: use of closed network connection
	
	
	==> kube-controller-manager [1cc365be5ed1fbe0ff7cbef3bba9928f6de3ee57c3a2f87a37b5414ce840c1e5] <==
	I1124 09:05:17.652104       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.652138       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.652152       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.652201       1 range_allocator.go:177] "Sending events to api server"
	I1124 09:05:17.652237       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1124 09:05:17.652242       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:05:17.652246       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.652814       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.652923       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.653920       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.654009       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.654103       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.654638       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.655183       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.655289       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.655391       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.654741       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.656610       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.671052       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-820576" podCIDRs=["10.244.0.0/24"]
	I1124 09:05:17.672326       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.746153       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:17.746175       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1124 09:05:17.746182       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1124 09:05:17.746484       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:37.647634       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [d11c1a1929cbd874879bd2ca658768b3b17486a565a73f3198763d8937ab7159] <==
	I1124 09:05:19.405212       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:05:19.470704       1 shared_informer.go:370] "Waiting for caches to sync"
	I1124 09:05:19.571665       1 shared_informer.go:377] "Caches are synced"
	I1124 09:05:19.571707       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 09:05:19.571825       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:05:19.593457       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:05:19.593546       1 server_linux.go:136] "Using iptables Proxier"
	I1124 09:05:19.598806       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:05:19.599327       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1124 09:05:19.599366       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:05:19.601008       1 config.go:200] "Starting service config controller"
	I1124 09:05:19.601053       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:05:19.601477       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:05:19.601494       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:05:19.601544       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:05:19.601604       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:05:19.601940       1 config.go:309] "Starting node config controller"
	I1124 09:05:19.601962       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:05:19.701650       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:05:19.701674       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:05:19.701701       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:05:19.702186       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [3792977e1319f5110036c4177368941dfeff0808bfb81b4f1f9accba9dc895b0] <==
	E1124 09:05:10.834797       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1124 09:05:10.834808       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1124 09:05:10.834939       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1124 09:05:10.835008       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1124 09:05:11.768737       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1124 09:05:11.770023       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1124 09:05:11.806172       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1124 09:05:11.807198       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1124 09:05:11.842020       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1124 09:05:11.843143       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1124 09:05:11.962537       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1124 09:05:11.963477       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1124 09:05:11.963483       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1124 09:05:11.963611       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1124 09:05:11.964324       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1124 09:05:11.964442       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1124 09:05:11.969522       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1124 09:05:11.970454       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1124 09:05:12.020752       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1124 09:05:12.021838       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1124 09:05:12.026929       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1124 09:05:12.028011       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1124 09:05:12.052338       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1124 09:05:12.053203       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1124 09:05:14.726256       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Nov 24 09:05:18 no-preload-820576 kubelet[2188]: I1124 09:05:18.885392    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf9hq\" (UniqueName: \"kubernetes.io/projected/967c23e8-7e42-4034-b5a2-e4cd65bc4d94-kube-api-access-vf9hq\") pod \"kindnet-kvm52\" (UID: \"967c23e8-7e42-4034-b5a2-e4cd65bc4d94\") " pod="kube-system/kindnet-kvm52"
	Nov 24 09:05:18 no-preload-820576 kubelet[2188]: I1124 09:05:18.885446    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a64a474-1e1b-411d-aea6-9d12e1d9f84e-xtables-lock\") pod \"kube-proxy-vz24l\" (UID: \"4a64a474-1e1b-411d-aea6-9d12e1d9f84e\") " pod="kube-system/kube-proxy-vz24l"
	Nov 24 09:05:18 no-preload-820576 kubelet[2188]: I1124 09:05:18.885493    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/967c23e8-7e42-4034-b5a2-e4cd65bc4d94-lib-modules\") pod \"kindnet-kvm52\" (UID: \"967c23e8-7e42-4034-b5a2-e4cd65bc4d94\") " pod="kube-system/kindnet-kvm52"
	Nov 24 09:05:18 no-preload-820576 kubelet[2188]: I1124 09:05:18.885515    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwg2f\" (UniqueName: \"kubernetes.io/projected/4a64a474-1e1b-411d-aea6-9d12e1d9f84e-kube-api-access-gwg2f\") pod \"kube-proxy-vz24l\" (UID: \"4a64a474-1e1b-411d-aea6-9d12e1d9f84e\") " pod="kube-system/kube-proxy-vz24l"
	Nov 24 09:05:20 no-preload-820576 kubelet[2188]: I1124 09:05:20.009606    2188 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-vz24l" podStartSLOduration=2.009575988 podStartE2EDuration="2.009575988s" podCreationTimestamp="2025-11-24 09:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:20.009405568 +0000 UTC m=+7.132094701" watchObservedRunningTime="2025-11-24 09:05:20.009575988 +0000 UTC m=+7.132265063"
	Nov 24 09:05:20 no-preload-820576 kubelet[2188]: E1124 09:05:20.073715    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-820576" containerName="etcd"
	Nov 24 09:05:20 no-preload-820576 kubelet[2188]: E1124 09:05:20.442119    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-820576" containerName="kube-apiserver"
	Nov 24 09:05:22 no-preload-820576 kubelet[2188]: E1124 09:05:22.827379    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-820576" containerName="kube-scheduler"
	Nov 24 09:05:23 no-preload-820576 kubelet[2188]: I1124 09:05:23.021998    2188 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-kvm52" podStartSLOduration=2.048567637 podStartE2EDuration="5.021980117s" podCreationTimestamp="2025-11-24 09:05:18 +0000 UTC" firstStartedPulling="2025-11-24 09:05:19.465760669 +0000 UTC m=+6.588449726" lastFinishedPulling="2025-11-24 09:05:22.439173133 +0000 UTC m=+9.561862206" observedRunningTime="2025-11-24 09:05:23.021631445 +0000 UTC m=+10.144320521" watchObservedRunningTime="2025-11-24 09:05:23.021980117 +0000 UTC m=+10.144669192"
	Nov 24 09:05:24 no-preload-820576 kubelet[2188]: E1124 09:05:24.962071    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-820576" containerName="kube-controller-manager"
	Nov 24 09:05:30 no-preload-820576 kubelet[2188]: E1124 09:05:30.074006    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-820576" containerName="etcd"
	Nov 24 09:05:30 no-preload-820576 kubelet[2188]: E1124 09:05:30.448408    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-820576" containerName="kube-apiserver"
	Nov 24 09:05:32 no-preload-820576 kubelet[2188]: E1124 09:05:32.832618    2188 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-820576" containerName="kube-scheduler"
	Nov 24 09:05:33 no-preload-820576 kubelet[2188]: I1124 09:05:33.014716    2188 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Nov 24 09:05:33 no-preload-820576 kubelet[2188]: I1124 09:05:33.095714    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/144d237b-4f80-441d-867b-0ee26edd8590-tmp\") pod \"storage-provisioner\" (UID: \"144d237b-4f80-441d-867b-0ee26edd8590\") " pod="kube-system/storage-provisioner"
	Nov 24 09:05:33 no-preload-820576 kubelet[2188]: I1124 09:05:33.095760    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr4ms\" (UniqueName: \"kubernetes.io/projected/144d237b-4f80-441d-867b-0ee26edd8590-kube-api-access-qr4ms\") pod \"storage-provisioner\" (UID: \"144d237b-4f80-441d-867b-0ee26edd8590\") " pod="kube-system/storage-provisioner"
	Nov 24 09:05:33 no-preload-820576 kubelet[2188]: I1124 09:05:33.095795    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1-config-volume\") pod \"coredns-7d764666f9-b6dpn\" (UID: \"c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1\") " pod="kube-system/coredns-7d764666f9-b6dpn"
	Nov 24 09:05:33 no-preload-820576 kubelet[2188]: I1124 09:05:33.095897    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nf2r\" (UniqueName: \"kubernetes.io/projected/c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1-kube-api-access-4nf2r\") pod \"coredns-7d764666f9-b6dpn\" (UID: \"c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1\") " pod="kube-system/coredns-7d764666f9-b6dpn"
	Nov 24 09:05:34 no-preload-820576 kubelet[2188]: E1124 09:05:34.029028    2188 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-b6dpn" containerName="coredns"
	Nov 24 09:05:34 no-preload-820576 kubelet[2188]: I1124 09:05:34.041906    2188 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-b6dpn" podStartSLOduration=16.041889167 podStartE2EDuration="16.041889167s" podCreationTimestamp="2025-11-24 09:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:34.041715133 +0000 UTC m=+21.164404209" watchObservedRunningTime="2025-11-24 09:05:34.041889167 +0000 UTC m=+21.164578242"
	Nov 24 09:05:34 no-preload-820576 kubelet[2188]: I1124 09:05:34.051548    2188 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.051533177 podStartE2EDuration="15.051533177s" podCreationTimestamp="2025-11-24 09:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:34.051306469 +0000 UTC m=+21.173995547" watchObservedRunningTime="2025-11-24 09:05:34.051533177 +0000 UTC m=+21.174222253"
	Nov 24 09:05:35 no-preload-820576 kubelet[2188]: E1124 09:05:35.033151    2188 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-b6dpn" containerName="coredns"
	Nov 24 09:05:36 no-preload-820576 kubelet[2188]: E1124 09:05:36.035006    2188 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-b6dpn" containerName="coredns"
	Nov 24 09:05:36 no-preload-820576 kubelet[2188]: I1124 09:05:36.313607    2188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlkv\" (UniqueName: \"kubernetes.io/projected/ed19b18b-e761-4aff-8676-38be0169fca8-kube-api-access-knlkv\") pod \"busybox\" (UID: \"ed19b18b-e761-4aff-8676-38be0169fca8\") " pod="default/busybox"
	Nov 24 09:05:39 no-preload-820576 kubelet[2188]: I1124 09:05:39.053972    2188 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.835569242 podStartE2EDuration="3.053954912s" podCreationTimestamp="2025-11-24 09:05:36 +0000 UTC" firstStartedPulling="2025-11-24 09:05:36.694661156 +0000 UTC m=+23.817350210" lastFinishedPulling="2025-11-24 09:05:38.913046824 +0000 UTC m=+26.035735880" observedRunningTime="2025-11-24 09:05:39.05362003 +0000 UTC m=+26.176309106" watchObservedRunningTime="2025-11-24 09:05:39.053954912 +0000 UTC m=+26.176643986"
	
	
	==> storage-provisioner [372566a488aa6257b59eba829cf1e66299ccffe9066320bc512378d4a8f37fc3] <==
	I1124 09:05:33.518708       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:05:33.526921       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:05:33.526973       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:05:33.529762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:33.539875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:05:33.540034       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:05:33.540191       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe9f1dac-6d1b-487a-9248-5f6453109d6b", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-820576_3a08bfb4-c7fa-4df8-97c3-4cc5a96f0994 became leader
	I1124 09:05:33.540287       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-820576_3a08bfb4-c7fa-4df8-97c3-4cc5a96f0994!
	W1124 09:05:33.542787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:33.546559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:05:33.641082       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-820576_3a08bfb4-c7fa-4df8-97c3-4cc5a96f0994!
	W1124 09:05:35.550005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:35.554075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:37.557403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:37.561227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:39.565032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:39.568902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:41.571752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:41.575652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:43.578893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:43.583135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:45.586509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:45.591565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:47.594402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:05:47.598874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
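The repeated client-go warnings in the storage-provisioner log above come from its leader election, which still records its lease on a core/v1 Endpoints object rather than an EndpointSlice. One way to inspect that lease by hand (a sketch, assuming kubectl is pointed at the same cluster via --context):

	kubectl --context no-preload-820576 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# the current holder (no-preload-820576_3a08bfb4-...) is recorded in the object's leader-election annotation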
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-820576 -n no-preload-820576
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-820576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (13.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (14.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-841285 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b0e3c418-2bd8-4d22-8f34-07ae172f4007] Pending
helpers_test.go:352: "busybox" [b0e3c418-2bd8-4d22-8f34-07ae172f4007] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b0e3c418-2bd8-4d22-8f34-07ae172f4007] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003670693s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-841285 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
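For reference, the failing assertion can be rerun by hand with the same command the harness uses (a sketch, assuming the busybox pod created from testdata/busybox.yaml is still running in the default namespace):

	kubectl --context embed-certs-841285 exec busybox -- /bin/sh -c "ulimit -n"
	# prints 1024 in this run; the test expects a soft open-files limit of 1048576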
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-841285
helpers_test.go:243: (dbg) docker inspect embed-certs-841285:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c",
	        "Created": "2025-11-24T09:06:13.101374533Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 715473,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:06:13.148755139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c/hostname",
	        "HostsPath": "/var/lib/docker/containers/2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c/hosts",
	        "LogPath": "/var/lib/docker/containers/2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c/2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c-json.log",
	        "Name": "/embed-certs-841285",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-841285:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-841285",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c",
	                "LowerDir": "/var/lib/docker/overlay2/4a6674e833905d19e86aef234376161d1823017660060b03112f8f644236912e-init/diff:/var/lib/docker/overlay2/a062700147ad5d1f8f2a68f70ba6ad34ea6495dd365bcb265ab17ea27961837b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a6674e833905d19e86aef234376161d1823017660060b03112f8f644236912e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a6674e833905d19e86aef234376161d1823017660060b03112f8f644236912e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a6674e833905d19e86aef234376161d1823017660060b03112f8f644236912e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-841285",
	                "Source": "/var/lib/docker/volumes/embed-certs-841285/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-841285",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-841285",
	                "name.minikube.sigs.k8s.io": "embed-certs-841285",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "67df66d701529e287730cb9bcd494fde3107ff602b70cf44fc90b796050f2eec",
	            "SandboxKey": "/var/run/docker/netns/67df66d70152",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-841285": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "878cc741640bbb1d91d845a9b685d01e89f4e862dc21c645f514f3029b1b1db2",
	                    "EndpointID": "125cc050625ae4fc4055cc1dd357d98e280c0e88627f2bd0be1b123cf15ef39d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e2:99:14:32:8f:dc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-841285",
	                        "2818f8831adf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-841285 -n embed-certs-841285
I1124 09:07:03.418240  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-841285 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-841285 logs -n 25: (1.306395892s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-203355 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo containerd config dump                                                                                                                                                                                                        │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo crio config                                                                                                                                                                                                                   │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ delete  │ -p cilium-203355                                                                                                                                                                                                                                    │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:04 UTC │
	│ start   │ -p old-k8s-version-128377 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:05 UTC │
	│ start   │ -p no-preload-820576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-128377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:05 UTC │ 24 Nov 25 09:05 UTC │
	│ stop    │ -p old-k8s-version-128377 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:05 UTC │ 24 Nov 25 09:06 UTC │
	│ addons  │ enable metrics-server -p no-preload-820576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:05 UTC │ 24 Nov 25 09:05 UTC │
	│ stop    │ -p no-preload-820576 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:05 UTC │ 24 Nov 25 09:06 UTC │
	│ start   │ -p cert-expiration-869306 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-869306 │ jenkins │ v1.37.0 │ 24 Nov 25 09:05 UTC │ 24 Nov 25 09:06 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-128377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:06 UTC │ 24 Nov 25 09:06 UTC │
	│ start   │ -p old-k8s-version-128377 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:06 UTC │ 24 Nov 25 09:06 UTC │
	│ addons  │ enable dashboard -p no-preload-820576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:06 UTC │ 24 Nov 25 09:06 UTC │
	│ start   │ -p no-preload-820576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:06 UTC │ 24 Nov 25 09:06 UTC │
	│ delete  │ -p cert-expiration-869306                                                                                                                                                                                                                           │ cert-expiration-869306 │ jenkins │ v1.37.0 │ 24 Nov 25 09:06 UTC │ 24 Nov 25 09:06 UTC │
	│ start   │ -p embed-certs-841285 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                        │ embed-certs-841285     │ jenkins │ v1.37.0 │ 24 Nov 25 09:06 UTC │ 24 Nov 25 09:06 UTC │
	│ image   │ no-preload-820576 image list --format=json                                                                                                                                                                                                          │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:06:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:06:07.483540  712609 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:06:07.483759  712609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:06:07.483768  712609 out.go:374] Setting ErrFile to fd 2...
	I1124 09:06:07.483772  712609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:06:07.484052  712609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 09:06:07.484663  712609 out.go:368] Setting JSON to false
	I1124 09:06:07.486191  712609 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13703,"bootTime":1763961464,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:06:07.486274  712609 start.go:143] virtualization: kvm guest
	I1124 09:06:07.488217  712609 out.go:179] * [embed-certs-841285] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:06:07.489473  712609 notify.go:221] Checking for updates...
	I1124 09:06:07.489482  712609 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:06:07.490660  712609 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:06:07.492212  712609 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:06:07.497449  712609 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 09:06:07.498639  712609 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:06:07.499749  712609 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:06:07.501661  712609 config.go:182] Loaded profile config "kubernetes-upgrade-521313": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:06:07.501837  712609 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:06:07.501982  712609 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:06:07.502126  712609 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:06:07.531929  712609 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:06:07.532059  712609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:06:07.625894  712609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 09:06:07.609806264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:06:07.626075  712609 docker.go:319] overlay module found
	I1124 09:06:07.628280  712609 out.go:179] * Using the docker driver based on user configuration
	I1124 09:06:07.629359  712609 start.go:309] selected driver: docker
	I1124 09:06:07.629378  712609 start.go:927] validating driver "docker" against <nil>
	I1124 09:06:07.629399  712609 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:06:07.630257  712609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:06:07.714617  712609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 09:06:07.700319261 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:06:07.715055  712609 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:06:07.715492  712609 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:06:07.716933  712609 out.go:179] * Using Docker driver with root privileges
	I1124 09:06:07.718370  712609 cni.go:84] Creating CNI manager for ""
	I1124 09:06:07.718503  712609 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:06:07.718517  712609 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:06:07.718614  712609 start.go:353] cluster config:
	{Name:embed-certs-841285 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-841285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:06:07.720286  712609 out.go:179] * Starting "embed-certs-841285" primary control-plane node in "embed-certs-841285" cluster
	I1124 09:06:07.721255  712609 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 09:06:07.722693  712609 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:06:07.725075  712609 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1124 09:06:07.725141  712609 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1124 09:06:07.725154  712609 cache.go:65] Caching tarball of preloaded images
	I1124 09:06:07.725172  712609 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:06:07.725284  712609 preload.go:238] Found /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 09:06:07.725301  712609 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1124 09:06:07.725442  712609 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/config.json ...
	I1124 09:06:07.725514  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/config.json: {Name:mkf857cbddcb0b21a16751e4fa391cd5aacc43ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:07.755608  712609 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:06:07.755635  712609 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:06:07.755649  712609 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:06:07.755689  712609 start.go:360] acquireMachinesLock for embed-certs-841285: {Name:mkeaf1c7c2f33c7fd2227e10c2a6ab7b1478dfe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:07.755790  712609 start.go:364] duration metric: took 74.877µs to acquireMachinesLock for "embed-certs-841285"
	I1124 09:06:07.755822  712609 start.go:93] Provisioning new machine with config: &{Name:embed-certs-841285 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-841285 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:06:07.755914  712609 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:06:03.017927  710410 out.go:252] * Restarting existing docker container for "no-preload-820576" ...
	I1124 09:06:03.018012  710410 cli_runner.go:164] Run: docker start no-preload-820576
	I1124 09:06:03.296314  710410 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:03.340739  710410 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:06:03.363219  710410 kic.go:430] container "no-preload-820576" state is running.
	I1124 09:06:03.363630  710410 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:06:03.382470  710410 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json ...
	I1124 09:06:03.382718  710410 machine.go:94] provisionDockerMachine start ...
	I1124 09:06:03.382805  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:03.402573  710410 main.go:143] libmachine: Using SSH client type: native
	I1124 09:06:03.402831  710410 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 09:06:03.402846  710410 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:06:03.403650  710410 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59318->127.0.0.1:33078: read: connection reset by peer
	I1124 09:06:03.620863  710410 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:03.967865  710410 cache.go:107] acquiring lock: {Name:mkbcabeb5a23ff077ffdad64c71e9fe699d94040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.967876  710410 cache.go:107] acquiring lock: {Name:mk8023690ce5b18d9a1789b2f878bf92c1381799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.967905  710410 cache.go:107] acquiring lock: {Name:mk1d635b72f6d026600360916178f900a450350e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.967933  710410 cache.go:107] acquiring lock: {Name:mk92c82896924ab47423467b25ccd98ee4128baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.967960  710410 cache.go:107] acquiring lock: {Name:mk6b573bbd33cfc3c3f77668030fb064598572fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.967979  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:06:03.967992  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:06:03.967999  710410 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 158.964µs
	I1124 09:06:03.968002  710410 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 118.312µs
	I1124 09:06:03.968015  710410 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:06:03.968016  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:06:03.967919  710410 cache.go:107] acquiring lock: {Name:mkd74819cb24442927f7fb2cffd47478de40e14c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.968028  710410 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 105.4µs
	I1124 09:06:03.968040  710410 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:06:03.968009  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:06:03.968049  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:06:03.967893  710410 cache.go:107] acquiring lock: {Name:mk7f052905284f586f4f1cf24b8c34cc48e0b85b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.968055  710410 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 195.465µs
	I1124 09:06:03.968031  710410 cache.go:107] acquiring lock: {Name:mkf3a006b133f81ed32779d427a8d0a9b25f9000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.968056  710410 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 138.45µs
	I1124 09:06:03.968063  710410 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:06:03.968017  710410 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:06:03.968069  710410 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:06:03.968100  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:06:03.968108  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:06:03.968114  710410 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 227.518µs
	I1124 09:06:03.968124  710410 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:06:03.968127  710410 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 161.684µs
	I1124 09:06:03.968144  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:06:03.968151  710410 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:06:03.968152  710410 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 159.681µs
	I1124 09:06:03.968161  710410 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:06:03.968177  710410 cache.go:87] Successfully saved all images to host disk.
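The cache lines above show each image being checked concurrently under its own lock and skipped when the cached tarball already exists. Below is a minimal Go sketch of that exists-then-skip pattern, assuming a hypothetical cache directory and omitting the real per-image lock and download/save logic.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

// cacheRoot is a hypothetical stand-in for minikube's per-profile cache dir.
const cacheRoot = "/tmp/minikube-cache/images/amd64"

func main() {
	images := []string{
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/etcd:3.5.24-0",
	}
	var wg sync.WaitGroup
	for _, img := range images {
		wg.Add(1)
		go func(img string) {
			defer wg.Done()
			start := time.Now()
			// Image references are flattened into file names, e.g. pause:3.10.1 -> pause_3.10.1.
			dst := filepath.Join(cacheRoot, strings.ReplaceAll(img, ":", "_"))
			if _, err := os.Stat(dst); err == nil {
				// Mirrors the "exists ... took NNNµs ... succeeded" lines above.
				fmt.Printf("cache image %q -> %q took %s (already on disk, skipping)\n", img, dst, time.Since(start))
				return
			}
			fmt.Printf("cache image %q -> %q would be pulled and saved as a tar file\n", img, dst)
		}(img)
	}
	wg.Wait()
}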
	I1124 09:06:06.557723  710410 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-820576
	
	I1124 09:06:06.557765  710410 ubuntu.go:182] provisioning hostname "no-preload-820576"
	I1124 09:06:06.557867  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:06.577599  710410 main.go:143] libmachine: Using SSH client type: native
	I1124 09:06:06.577813  710410 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 09:06:06.577826  710410 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-820576 && echo "no-preload-820576" | sudo tee /etc/hostname
	I1124 09:06:06.734573  710410 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-820576
	
	I1124 09:06:06.734721  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:06.754862  710410 main.go:143] libmachine: Using SSH client type: native
	I1124 09:06:06.755130  710410 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 09:06:06.755162  710410 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820576/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820576' | sudo tee -a /etc/hosts; 
				fi
			fi
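The shell fragment above keeps /etc/hosts idempotent: it only rewrites or appends the 127.0.1.1 entry when the hostname is not already mapped. A minimal Go sketch of the same logic follows, assuming a hypothetical local copy of the file (the real flow runs the shell version over SSH inside the node).

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostname makes sure `name` resolves via a 127.0.1.1 entry in the given hosts file.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Equivalent of `grep -xq '.*\sNAME' /etc/hosts`: already mapped, nothing to do.
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
		return nil
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing loopback alias
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name) // or append a new entry
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	// "./hosts.example" is a hypothetical path used for illustration only.
	if err := ensureHostname("./hosts.example", "no-preload-820576"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}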
	I1124 09:06:06.920799  710410 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:06:06.920836  710410 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:06:06.920867  710410 ubuntu.go:190] setting up certificates
	I1124 09:06:06.920889  710410 provision.go:84] configureAuth start
	I1124 09:06:06.920981  710410 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:06:06.941231  710410 provision.go:143] copyHostCerts
	I1124 09:06:06.941304  710410 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:06:06.941329  710410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:06:06.941399  710410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:06:06.941559  710410 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:06:06.941571  710410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:06:06.941616  710410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:06:06.941718  710410 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:06:06.941733  710410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:06:06.941774  710410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:06:06.941867  710410 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.no-preload-820576 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-820576]
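The provisioning step above signs a server certificate with the profile CA and the listed SANs (loopback, node IP, hostname aliases). A minimal Go sketch of that signing flow follows; key sizes, validity period, and the in-memory CA are assumptions for illustration, not minikube's actual parameters.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA; the real flow loads ca.pem / ca-key.pem from the minikube certs dir.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs shown in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-820576"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-820576"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}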
	I1124 09:06:06.972955  710410 provision.go:177] copyRemoteCerts
	I1124 09:06:06.973028  710410 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:06:06.973077  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:06.996308  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:07.101497  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:06:07.119671  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:06:07.139380  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 09:06:07.159230  710410 provision.go:87] duration metric: took 238.32094ms to configureAuth
	I1124 09:06:07.159268  710410 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:06:07.159536  710410 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:06:07.159564  710410 machine.go:97] duration metric: took 3.776825081s to provisionDockerMachine
	I1124 09:06:07.159576  710410 start.go:293] postStartSetup for "no-preload-820576" (driver="docker")
	I1124 09:06:07.159592  710410 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:06:07.159671  710410 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:06:07.159728  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:07.179270  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:07.286516  710410 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:06:07.290562  710410 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:06:07.290599  710410 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:06:07.290610  710410 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:06:07.290663  710410 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:06:07.290742  710410 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:06:07.290873  710410 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:06:07.299309  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:06:07.317122  710410 start.go:296] duration metric: took 157.527884ms for postStartSetup
	I1124 09:06:07.317211  710410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:06:07.317246  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:07.336146  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:07.438137  710410 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:06:07.443360  710410 fix.go:56] duration metric: took 4.447269608s for fixHost
	I1124 09:06:07.443392  710410 start.go:83] releasing machines lock for "no-preload-820576", held for 4.447325578s
	I1124 09:06:07.443493  710410 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:06:07.464550  710410 ssh_runner.go:195] Run: cat /version.json
	I1124 09:06:07.464611  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:07.464648  710410 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:06:07.464732  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:07.485402  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:07.487047  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:07.594978  710410 ssh_runner.go:195] Run: systemctl --version
	I1124 09:06:07.681513  710410 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:06:07.688502  710410 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:06:07.688582  710410 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:06:07.701206  710410 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:06:07.701281  710410 start.go:496] detecting cgroup driver to use...
	I1124 09:06:07.701318  710410 detect.go:190] detected "systemd" cgroup driver on host os
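The driver detection above reports "systemd" for the host. A minimal Go sketch of one common heuristic follows, assuming that a unified cgroup v2 hierarchy or a running systemd implies the "systemd" driver; minikube's real detection logic is more involved than this.

package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver is an illustrative heuristic, not minikube's implementation.
func detectCgroupDriver() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd" // cgroup v2 unified hierarchy
	}
	if _, err := os.Stat("/run/systemd/system"); err == nil {
		return "systemd" // systemd is PID 1 even on a cgroup v1 host
	}
	return "cgroupfs"
}

func main() {
	fmt.Printf("detected %q cgroup driver on host os\n", detectCgroupDriver())
}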
	I1124 09:06:07.701495  710410 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:06:07.729598  710410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:06:07.750258  710410 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:06:07.750315  710410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:06:07.775934  710410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:06:06.598684  709503 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:06:06.617474  709503 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 09:06:06.622019  709503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:06:06.633478  709503 kubeadm.go:884] updating cluster {Name:old-k8s-version-128377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:06:06.633622  709503 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 09:06:06.633672  709503 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:06:06.661265  709503 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:06:06.661287  709503 containerd.go:534] Images already preloaded, skipping extraction
	I1124 09:06:06.661334  709503 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:06:06.689156  709503 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:06:06.689178  709503 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:06:06.689192  709503 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 containerd true true} ...
	I1124 09:06:06.689295  709503 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-128377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:06:06.689357  709503 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:06:06.717670  709503 cni.go:84] Creating CNI manager for ""
	I1124 09:06:06.717695  709503 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:06:06.717716  709503 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:06:06.717743  709503 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-128377 NodeName:old-k8s-version-128377 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.cr
t StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:06:06.717921  709503 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-128377"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
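The kubeadm configuration above is rendered from the per-profile options (node IP, API server port, cluster name, and so on). A minimal Go sketch of that templating step follows; the template here is an illustrative subset covering only the InitConfiguration stanza, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// initCfg renders a trimmed-down InitConfiguration from node parameters.
var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`))

func main() {
	params := struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
	}{"192.168.103.2", 8443, "old-k8s-version-128377"}
	if err := initCfg.Execute(os.Stdout, params); err != nil {
		os.Exit(1)
	}
}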
	
	I1124 09:06:06.718016  709503 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 09:06:06.726942  709503 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:06:06.727012  709503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:06:06.735521  709503 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1124 09:06:06.749766  709503 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:06:06.776782  709503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I1124 09:06:06.801084  709503 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:06:06.805881  709503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:06:06.818254  709503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:06.922245  709503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:06:06.949494  709503 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377 for IP: 192.168.103.2
	I1124 09:06:06.949517  709503 certs.go:195] generating shared ca certs ...
	I1124 09:06:06.949537  709503 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:06.949709  709503 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:06:06.949772  709503 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:06:06.949785  709503 certs.go:257] generating profile certs ...
	I1124 09:06:06.949913  709503 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key
	I1124 09:06:06.950010  709503 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1
	I1124 09:06:06.950061  709503 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key
	I1124 09:06:06.950193  709503 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:06:06.950232  709503 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:06:06.950247  709503 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:06:06.950291  709503 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:06:06.950335  709503 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:06:06.950367  709503 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:06:06.950428  709503 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:06:06.951361  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:06:06.972328  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:06:06.997133  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:06:07.017763  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:06:07.042410  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 09:06:07.067015  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:06:07.088536  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:06:07.106991  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:06:07.125258  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:06:07.145094  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:06:07.165370  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:06:07.186071  709503 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:06:07.202024  709503 ssh_runner.go:195] Run: openssl version
	I1124 09:06:07.209376  709503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:06:07.219680  709503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:06:07.224015  709503 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:06:07.224071  709503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:06:07.262906  709503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:06:07.279541  709503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:06:07.289657  709503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:07.294353  709503 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:07.294414  709503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:07.334199  709503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:06:07.343587  709503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:06:07.353579  709503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:06:07.358206  709503 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:06:07.358275  709503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:06:07.395934  709503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:06:07.404703  709503 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:06:07.408649  709503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:06:07.445334  709503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:06:07.488909  709503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:06:07.546273  709503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:06:07.608976  709503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:06:07.680011  709503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
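The series of `openssl x509 -checkend 86400` runs above verifies that each control-plane certificate is still valid for at least another 24 hours before the cluster is reused. A minimal Go sketch of the same check follows; it reads any PEM certificate passed on the command line (the path is up to the caller) and mirrors openssl's non-zero exit on imminent expiry.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of -checkend 86400: fail if the cert expires within the next 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Printf("certificate expires within 86400s (NotAfter=%s)\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}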
	I1124 09:06:07.743611  709503 kubeadm.go:401] StartCluster: {Name:old-k8s-version-128377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:06:07.743756  709503 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:06:07.743847  709503 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:06:07.805661  709503 cri.go:89] found id: "2cde3fd3b1fec7bf82db1a556c3f52809087a3ba3436fa7b5d61a127b5a21f8a"
	I1124 09:06:07.805694  709503 cri.go:89] found id: "386284bd736fa410b6ec7b285a702805b8191ae596f733130a95a6b9cdd592ae"
	I1124 09:06:07.805700  709503 cri.go:89] found id: "14fb25e463548893bd8f955087086fc8bd977521886ef75c9d23fec76d610697"
	I1124 09:06:07.805704  709503 cri.go:89] found id: "5282f1c920eb7ff37391f75191d28585e4d302ce4ec44fb44ce68a88c776b537"
	I1124 09:06:07.805709  709503 cri.go:89] found id: "a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5"
	I1124 09:06:07.805714  709503 cri.go:89] found id: "a9a5857553e67019e47641c1970bb0d5555afd6b608c94a94501dd485efac0c4"
	I1124 09:06:07.805718  709503 cri.go:89] found id: "818537e08c0605796949e72c73a034b7d5f104ce598d4a12f0ed8bf30de9c646"
	I1124 09:06:07.805722  709503 cri.go:89] found id: "370631aaaf577fb6a343282108f71bb03e72ef6024de9d9f8e2a2eeb7e16e746"
	I1124 09:06:07.805726  709503 cri.go:89] found id: "f5eddecfb179fe94de6b3892600fc1870efa5679c82874d72a3b301753e6f7d4"
	I1124 09:06:07.805736  709503 cri.go:89] found id: "5d9ec22e03b8b0446d34a5b300037519eb0aa0be6b1e6c451907abb271f71839"
	I1124 09:06:07.805740  709503 cri.go:89] found id: "842bd9db2d84b65b054e4b006bfb9c11b98ac3cdcbe13cd821183480cd046d8a"
	I1124 09:06:07.805744  709503 cri.go:89] found id: "8df3112d99751cf0ed66add055e0df50e3c944dbb66b787e2e3ae37efbec7d4e"
	I1124 09:06:07.805748  709503 cri.go:89] found id: ""
	I1124 09:06:07.805800  709503 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1124 09:06:07.858533  709503 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"14fb25e463548893bd8f955087086fc8bd977521886ef75c9d23fec76d610697","pid":953,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14fb25e463548893bd8f955087086fc8bd977521886ef75c9d23fec76d610697","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14fb25e463548893bd8f955087086fc8bd977521886ef75c9d23fec76d610697/rootfs","created":"2025-11-24T09:06:07.767682233Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.0","io.kubernetes.cri.sandbox-id":"ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"97465a4cd4188931f10ea28e1a2907e2"},"owner":"root"},{"ociVersion":
"1.2.1","id":"2cde3fd3b1fec7bf82db1a556c3f52809087a3ba3436fa7b5d61a127b5a21f8a","pid":969,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cde3fd3b1fec7bf82db1a556c3f52809087a3ba3436fa7b5d61a127b5a21f8a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cde3fd3b1fec7bf82db1a556c3f52809087a3ba3436fa7b5d61a127b5a21f8a/rootfs","created":"2025-11-24T09:06:07.770209322Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.0","io.kubernetes.cri.sandbox-id":"7a9ceda96c311eb5009b83f30ee6243b2d488849704e328dffef8c760fbb8066","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"551952eef6cb6e731333d664adafec03"},"owner":"root"},{"ociVersion":"1.2.1","id":"386284bd736fa410b6ec7b285a702805b8191ae596f733130a95a6b9cdd592ae","pid":952,"status":"
running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/386284bd736fa410b6ec7b285a702805b8191ae596f733130a95a6b9cdd592ae","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/386284bd736fa410b6ec7b285a702805b8191ae596f733130a95a6b9cdd592ae/rootfs","created":"2025-11-24T09:06:07.75785436Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf","io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1b146c92afb8c14021010a6f689d3581"},"owner":"root"},{"ociVersion":"1.2.1","id":"5282f1c920eb7ff37391f75191d28585e4d302ce4ec44fb44ce68a88c776b537","pid":938,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5282f1c920eb7ff37391f75191d28585e4d302ce4ec44fb44ce68a88c77
6b537","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5282f1c920eb7ff37391f75191d28585e4d302ce4ec44fb44ce68a88c776b537/rootfs","created":"2025-11-24T09:06:07.752935382Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.0","io.kubernetes.cri.sandbox-id":"94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"869d206dcde1c4f8d5d525ee4860a861"},"owner":"root"},{"ociVersion":"1.2.1","id":"7a9ceda96c311eb5009b83f30ee6243b2d488849704e328dffef8c760fbb8066","pid":861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a9ceda96c311eb5009b83f30ee6243b2d488849704e328dffef8c760fbb8066","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a9ceda96c311eb5009b83f30ee624
3b2d488849704e328dffef8c760fbb8066/rootfs","created":"2025-11-24T09:06:07.629953763Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7a9ceda96c311eb5009b83f30ee6243b2d488849704e328dffef8c760fbb8066","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-128377_551952eef6cb6e731333d664adafec03","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"551952eef6cb6e731333d664adafec03"},"owner":"root"},{"ociVersion":"1.2.1","id":"94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9","pid":812,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/
94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9/rootfs","created":"2025-11-24T09:06:07.585036749Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-128377_869d206dcde1c4f8d5d525ee4860a861","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"869d206dcde1c4f8d5d525ee4860a861"},"owner":"root"},{"ociVersion":"1.2.1","id":
"ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b","pid":840,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b/rootfs","created":"2025-11-24T09:06:07.601657583Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-128377_97465a4cd4188931f10ea28e1a2907e2","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-ve
rsion-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"97465a4cd4188931f10ea28e1a2907e2"},"owner":"root"},{"ociVersion":"1.2.1","id":"e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf","pid":868,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf/rootfs","created":"2025-11-24T09:06:07.628088181Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8
s-version-128377_1b146c92afb8c14021010a6f689d3581","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1b146c92afb8c14021010a6f689d3581"},"owner":"root"}]
	I1124 09:06:07.858785  709503 cri.go:126] list returned 8 containers
	I1124 09:06:07.858815  709503 cri.go:129] container: {ID:14fb25e463548893bd8f955087086fc8bd977521886ef75c9d23fec76d610697 Status:running}
	I1124 09:06:07.858852  709503 cri.go:135] skipping {14fb25e463548893bd8f955087086fc8bd977521886ef75c9d23fec76d610697 running}: state = "running", want "paused"
	I1124 09:06:07.858872  709503 cri.go:129] container: {ID:2cde3fd3b1fec7bf82db1a556c3f52809087a3ba3436fa7b5d61a127b5a21f8a Status:running}
	I1124 09:06:07.858888  709503 cri.go:135] skipping {2cde3fd3b1fec7bf82db1a556c3f52809087a3ba3436fa7b5d61a127b5a21f8a running}: state = "running", want "paused"
	I1124 09:06:07.858896  709503 cri.go:129] container: {ID:386284bd736fa410b6ec7b285a702805b8191ae596f733130a95a6b9cdd592ae Status:running}
	I1124 09:06:07.858908  709503 cri.go:135] skipping {386284bd736fa410b6ec7b285a702805b8191ae596f733130a95a6b9cdd592ae running}: state = "running", want "paused"
	I1124 09:06:07.858915  709503 cri.go:129] container: {ID:5282f1c920eb7ff37391f75191d28585e4d302ce4ec44fb44ce68a88c776b537 Status:running}
	I1124 09:06:07.858922  709503 cri.go:135] skipping {5282f1c920eb7ff37391f75191d28585e4d302ce4ec44fb44ce68a88c776b537 running}: state = "running", want "paused"
	I1124 09:06:07.858927  709503 cri.go:129] container: {ID:7a9ceda96c311eb5009b83f30ee6243b2d488849704e328dffef8c760fbb8066 Status:running}
	I1124 09:06:07.858944  709503 cri.go:131] skipping 7a9ceda96c311eb5009b83f30ee6243b2d488849704e328dffef8c760fbb8066 - not in ps
	I1124 09:06:07.858958  709503 cri.go:129] container: {ID:94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9 Status:running}
	I1124 09:06:07.858965  709503 cri.go:131] skipping 94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9 - not in ps
	I1124 09:06:07.858970  709503 cri.go:129] container: {ID:ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b Status:running}
	I1124 09:06:07.858975  709503 cri.go:131] skipping ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b - not in ps
	I1124 09:06:07.858980  709503 cri.go:129] container: {ID:e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf Status:running}
	I1124 09:06:07.858986  709503 cri.go:131] skipping e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf - not in ps
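The trace above decodes `runc list -f json` and keeps only containers whose state matches the wanted one ("paused" during a restart), skipping running containers and sandbox IDs that crictl did not report. A minimal Go sketch of that filtering follows; the embedded JSON is a trimmed, hypothetical stand-in for the real runc output.

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer captures only the fields the filter needs.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	raw := []byte(`[
	  {"id": "container-a", "status": "running"},
	  {"id": "container-b", "status": "paused"}
	]`)
	var containers []runcContainer
	if err := json.Unmarshal(raw, &containers); err != nil {
		panic(err)
	}
	want := "paused"
	for _, c := range containers {
		if c.Status != want {
			// Mirrors the "skipping {... running}: state = "running", want "paused"" lines above.
			fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, want)
			continue
		}
		fmt.Printf("selected %s\n", c.ID)
	}
}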
	I1124 09:06:07.859050  709503 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:06:07.892125  709503 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:06:07.892148  709503 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:06:07.892207  709503 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:06:07.909145  709503 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:06:07.909911  709503 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-128377" does not appear in /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:06:07.910245  709503 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-435860/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-128377" cluster setting kubeconfig missing "old-k8s-version-128377" context setting]
	I1124 09:06:07.911503  709503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:07.914069  709503 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:06:07.930566  709503 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1124 09:06:07.930786  709503 kubeadm.go:602] duration metric: took 38.609119ms to restartPrimaryControlPlane
	I1124 09:06:07.930903  709503 kubeadm.go:403] duration metric: took 187.309002ms to StartCluster
	I1124 09:06:07.930972  709503 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:07.931189  709503 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:06:07.933815  709503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:07.934627  709503 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:06:07.934764  709503 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:06:07.934918  709503 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-128377"
	I1124 09:06:07.934939  709503 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-128377"
	W1124 09:06:07.934947  709503 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:06:07.934979  709503 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:06:07.934730  709503 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:06:07.935328  709503 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-128377"
	I1124 09:06:07.935353  709503 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-128377"
	W1124 09:06:07.935431  709503 addons.go:248] addon metrics-server should already be in state true
	I1124 09:06:07.935543  709503 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:06:07.935314  709503 addons.go:70] Setting dashboard=true in profile "old-k8s-version-128377"
	I1124 09:06:07.935763  709503 addons.go:239] Setting addon dashboard=true in "old-k8s-version-128377"
	W1124 09:06:07.935776  709503 addons.go:248] addon dashboard should already be in state true
	I1124 09:06:07.935836  709503 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:06:07.935298  709503 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-128377"
	I1124 09:06:07.935911  709503 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-128377"
	I1124 09:06:07.936129  709503 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:06:07.936420  709503 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:06:07.936429  709503 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:06:07.937728  709503 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:06:07.938151  709503 out.go:179] * Verifying Kubernetes components...
	I1124 09:06:07.939350  709503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:07.968860  709503 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-128377"
	W1124 09:06:07.968932  709503 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:06:07.968967  709503 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:06:07.969542  709503 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:06:07.970612  709503 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:06:07.971688  709503 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:07.971709  709503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:06:07.971776  709503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:06:07.982548  709503 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:06:07.983751  709503 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:06:07.984943  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:06:07.984964  709503 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:06:07.985032  709503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:06:07.989064  709503 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 09:06:07.798355  710410 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:06:07.959783  710410 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:06:08.169711  710410 docker.go:234] disabling docker service ...
	I1124 09:06:08.170079  710410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:06:08.192752  710410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:06:08.217711  710410 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:06:08.371537  710410 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:06:08.520292  710410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:06:08.542357  710410 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:06:08.567348  710410 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
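The "checksum=file:..." URL above means the kubeadm binary is downloaded and verified against its published SHA-256 file rather than cached. A minimal Go sketch of that download-and-verify step follows, using the URLs from the log line; error handling is abbreviated and this is not minikube's actual downloader.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The .sha256 file holds the hex digest (possibly followed by a file name).
	want := strings.Fields(string(sumFile))[0]
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		os.Exit(1)
	}
	fmt.Println("kubeadm checksum verified")
}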
	I1124 09:06:08.935352  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 09:06:08.946105  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:06:08.956076  710410 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:06:08.956151  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:06:08.965899  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:06:08.975290  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:06:08.984942  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:06:08.994561  710410 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:06:09.003383  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:06:09.013261  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:06:09.023845  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
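The sed commands above rewrite /etc/containerd/config.toml in place so that containerd uses the systemd cgroup driver, the runc v2 shim, the expected CNI conf dir, and unprivileged ports. A minimal Go sketch of the central edit, the `SystemdCgroup = true` rewrite, follows; it operates on a small stand-in snippet of the config rather than the real file.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical excerpt of /etc/containerd/config.toml.
	cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
`
	// Same substitution as the sed command: keep indentation, force the value to true.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = true"))
}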
	I1124 09:06:09.033552  710410 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:06:09.041637  710410 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:06:09.049555  710410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:09.149233  710410 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 09:06:09.260304  710410 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:06:09.260382  710410 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:06:09.265056  710410 start.go:564] Will wait 60s for crictl version
	I1124 09:06:09.265129  710410 ssh_runner.go:195] Run: which crictl
	I1124 09:06:09.269253  710410 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:06:09.298618  710410 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:06:09.298686  710410 ssh_runner.go:195] Run: containerd --version
	I1124 09:06:09.322033  710410 ssh_runner.go:195] Run: containerd --version
	I1124 09:06:09.346867  710410 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1124 09:06:05.478330  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:07.990188  709503 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 09:06:07.990211  709503 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 09:06:07.990277  709503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:06:08.019953  709503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:06:08.022995  709503 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:08.023018  709503 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:06:08.023081  709503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:06:08.038684  709503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:06:08.047506  709503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:06:08.074610  709503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:06:08.213005  709503 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 09:06:08.213118  709503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1124 09:06:08.218819  709503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:06:08.229963  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:06:08.229989  709503 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:06:08.247835  709503 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-128377" to be "Ready" ...
	I1124 09:06:08.254634  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:06:08.254660  709503 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:06:08.255027  709503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:08.295607  709503 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 09:06:08.295682  709503 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 09:06:08.298266  709503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:08.311154  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:06:08.311197  709503 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:06:08.333308  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:06:08.333347  709503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:06:08.350278  709503 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:06:08.350304  709503 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 09:06:08.380567  709503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:06:08.382336  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:06:08.382375  709503 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:06:08.406934  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:06:08.406969  709503 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:06:08.450715  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:06:08.450745  709503 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:06:08.512388  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:06:08.512416  709503 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:06:08.534866  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:06:08.534894  709503 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:06:08.568308  709503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:06:10.558760  709503 node_ready.go:49] node "old-k8s-version-128377" is "Ready"
	I1124 09:06:10.558793  709503 node_ready.go:38] duration metric: took 2.310917996s for node "old-k8s-version-128377" to be "Ready" ...
	I1124 09:06:10.558809  709503 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:06:10.558874  709503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:06:09.348190  710410 cli_runner.go:164] Run: docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:06:09.365511  710410 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:06:09.369983  710410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:06:09.380785  710410 kubeadm.go:884] updating cluster {Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:06:09.381014  710410 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:09.698668  710410 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:10.063688  710410 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:10.401786  710410 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:06:10.401880  710410 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:06:10.446642  710410 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:06:10.446676  710410 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:06:10.446687  710410 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1124 09:06:10.446829  710410 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:06:10.446907  710410 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:06:10.479317  710410 cni.go:84] Creating CNI manager for ""
	I1124 09:06:10.479342  710410 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:06:10.479365  710410 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:06:10.479414  710410 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820576 NodeName:no-preload-820576 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:06:10.479636  710410 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-820576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:06:10.479724  710410 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:06:10.489536  710410 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:06:10.489618  710410 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:06:10.498562  710410 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1124 09:06:10.514039  710410 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:06:10.530535  710410 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
	I1124 09:06:10.557382  710410 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:06:10.563118  710410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:06:10.589362  710410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:10.740319  710410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:06:10.771888  710410 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576 for IP: 192.168.85.2
	I1124 09:06:10.771931  710410 certs.go:195] generating shared ca certs ...
	I1124 09:06:10.771953  710410 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:10.773114  710410 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:06:10.773247  710410 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:06:10.773282  710410 certs.go:257] generating profile certs ...
	I1124 09:06:10.773446  710410 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key
	I1124 09:06:10.773567  710410 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632
	I1124 09:06:10.773625  710410 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key
	I1124 09:06:10.773794  710410 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:06:10.773841  710410 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:06:10.773865  710410 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:06:10.773909  710410 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:06:10.773946  710410 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:06:10.773982  710410 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:06:10.774051  710410 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:06:10.774961  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:06:10.800274  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:06:10.824284  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:06:10.863611  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:06:10.896300  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:06:10.937202  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:06:10.967290  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:06:10.990246  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:06:11.011641  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:06:11.032149  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:06:11.070004  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:06:11.098006  710410 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:06:11.112693  710410 ssh_runner.go:195] Run: openssl version
	I1124 09:06:11.120012  710410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:06:11.133685  710410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:06:11.142019  710410 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:06:11.142082  710410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:06:11.199392  710410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:06:11.208974  710410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:06:11.219230  710410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:06:11.224709  710410 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:06:11.224787  710410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:06:11.263304  710410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:06:11.273452  710410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:06:11.285214  710410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:11.290634  710410 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:11.290697  710410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:11.334365  710410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:06:11.343999  710410 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:06:11.349716  710410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:06:11.393022  710410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:06:11.429451  710410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:06:11.467433  710410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:06:11.523563  710410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:06:11.581537  710410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
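
Each of the openssl runs above uses -checkend 86400 to ask whether a control-plane certificate is still valid 24 hours from now; a non-zero exit would force the certificate to be regenerated before restart. A rough Go equivalent of one such check, using the same certificate path as the last log line purely for illustration:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same certificate the log checks with `openssl x509 -checkend 86400`.
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		panic(err)
	}

	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}

	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// -checkend 86400: fail if the cert is no longer valid 24h from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
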
	I1124 09:06:11.715888  710410 kubeadm.go:401] StartCluster: {Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:06:11.715993  710410 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:06:11.716044  710410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:06:11.746839  710410 cri.go:89] found id: "1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5"
	I1124 09:06:11.746867  710410 cri.go:89] found id: "372566a488aa6257b59eba829cf1e66299ccffe9066320bc512378d4a8f37fc3"
	I1124 09:06:11.746872  710410 cri.go:89] found id: "f013ec6444310f79abf35dd005056c59b873c4bea9b56849cc31c4d45f1fd1ea"
	I1124 09:06:11.746876  710410 cri.go:89] found id: "d11c1a1929cbd874879bd2ca658768b3b17486a565a73f3198763d8937ab7159"
	I1124 09:06:11.746879  710410 cri.go:89] found id: "3792977e1319f5110036c4177368941dfeff0808bfb81b4f1f9accba9dc895b0"
	I1124 09:06:11.746882  710410 cri.go:89] found id: "1cc365be5ed1fbe0ff7cbef3bba9928f6de3ee57c3a2f87a37b5414ce840c1e5"
	I1124 09:06:11.746885  710410 cri.go:89] found id: "942b50869b3b6efe304af13454ac7bcfcd639ee8d85edb9543534540fab1a5ac"
	I1124 09:06:11.746887  710410 cri.go:89] found id: "0d5c89e98d645bf73cd4c5c3f30b9202f3ec35a62f3f8d3ae062d5d623eccb24"
	I1124 09:06:11.746892  710410 cri.go:89] found id: ""
	I1124 09:06:11.746945  710410 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1124 09:06:11.761985  710410 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:06:11Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1124 09:06:11.762058  710410 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:06:11.775299  710410 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:06:11.775320  710410 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:06:11.775372  710410 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:06:11.787178  710410 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:06:11.788096  710410 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-820576" does not appear in /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:06:11.788567  710410 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-435860/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-820576" cluster setting kubeconfig missing "no-preload-820576" context setting]
	I1124 09:06:11.789318  710410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:11.819317  710410 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:06:11.829219  710410 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 09:06:11.829255  710410 kubeadm.go:602] duration metric: took 53.926233ms to restartPrimaryControlPlane
	I1124 09:06:11.829264  710410 kubeadm.go:403] duration metric: took 113.387483ms to StartCluster
	I1124 09:06:11.829283  710410 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:11.829358  710410 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:06:11.830779  710410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:11.881377  710410 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:06:11.881518  710410 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:06:11.881659  710410 addons.go:70] Setting storage-provisioner=true in profile "no-preload-820576"
	I1124 09:06:11.881685  710410 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:06:11.881695  710410 addons.go:70] Setting metrics-server=true in profile "no-preload-820576"
	I1124 09:06:11.881692  710410 addons.go:70] Setting default-storageclass=true in profile "no-preload-820576"
	I1124 09:06:11.881713  710410 addons.go:239] Setting addon metrics-server=true in "no-preload-820576"
	W1124 09:06:11.881721  710410 addons.go:248] addon metrics-server should already be in state true
	I1124 09:06:11.881716  710410 addons.go:70] Setting dashboard=true in profile "no-preload-820576"
	I1124 09:06:11.881690  710410 addons.go:239] Setting addon storage-provisioner=true in "no-preload-820576"
	I1124 09:06:11.881718  710410 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820576"
	I1124 09:06:11.881736  710410 addons.go:239] Setting addon dashboard=true in "no-preload-820576"
	W1124 09:06:11.881743  710410 addons.go:248] addon storage-provisioner should already be in state true
	W1124 09:06:11.881745  710410 addons.go:248] addon dashboard should already be in state true
	I1124 09:06:11.881753  710410 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:06:11.881768  710410 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:06:11.881774  710410 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:06:11.882069  710410 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:06:11.882237  710410 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:06:11.882245  710410 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:06:11.882250  710410 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:06:11.939425  710410 out.go:179] * Verifying Kubernetes components...
	I1124 09:06:11.939931  710410 addons.go:239] Setting addon default-storageclass=true in "no-preload-820576"
	W1124 09:06:11.940692  710410 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:06:11.940739  710410 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:06:11.941244  710410 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:06:11.941264  710410 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:06:11.941301  710410 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 09:06:11.941329  710410 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:06:11.946558  710410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:11.948132  710410 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 09:06:11.948155  710410 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 09:06:11.948179  710410 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:11.948196  710410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:06:11.948220  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:11.948266  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:11.953192  710410 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:06:07.757449  712609 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:06:07.757732  712609 start.go:159] libmachine.API.Create for "embed-certs-841285" (driver="docker")
	I1124 09:06:07.757769  712609 client.go:173] LocalClient.Create starting
	I1124 09:06:07.757822  712609 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem
	I1124 09:06:07.757857  712609 main.go:143] libmachine: Decoding PEM data...
	I1124 09:06:07.757876  712609 main.go:143] libmachine: Parsing certificate...
	I1124 09:06:07.757933  712609 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem
	I1124 09:06:07.757954  712609 main.go:143] libmachine: Decoding PEM data...
	I1124 09:06:07.757966  712609 main.go:143] libmachine: Parsing certificate...
	I1124 09:06:07.758289  712609 cli_runner.go:164] Run: docker network inspect embed-certs-841285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:06:07.786287  712609 cli_runner.go:211] docker network inspect embed-certs-841285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:06:07.786412  712609 network_create.go:284] running [docker network inspect embed-certs-841285] to gather additional debugging logs...
	I1124 09:06:07.786444  712609 cli_runner.go:164] Run: docker network inspect embed-certs-841285
	W1124 09:06:07.812736  712609 cli_runner.go:211] docker network inspect embed-certs-841285 returned with exit code 1
	I1124 09:06:07.812786  712609 network_create.go:287] error running [docker network inspect embed-certs-841285]: docker network inspect embed-certs-841285: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-841285 not found
	I1124 09:06:07.812805  712609 network_create.go:289] output of [docker network inspect embed-certs-841285]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-841285 not found
	
	** /stderr **
	I1124 09:06:07.812915  712609 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:06:07.838220  712609 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c654f70fdf0e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:f7:ca:91:9d:ad} reservation:<nil>}
	I1124 09:06:07.839216  712609 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1081c4000c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:b1:6d:32:2c:78} reservation:<nil>}
	I1124 09:06:07.840271  712609 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30fdd1988974 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:59:2f:0a:61:81} reservation:<nil>}
	I1124 09:06:07.841370  712609 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cd297979890 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:91:f3:e4:95:17} reservation:<nil>}
	I1124 09:06:07.842376  712609 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-7957ce7dc9ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:12:7d:52:b6:17:25} reservation:<nil>}
	I1124 09:06:07.843628  712609 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d42cf0}
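
The subnet scan above walks candidate /24 networks (the third octet advances by 9 per attempt: .49, .58, .67, .76, .85) and settles on the first one without an existing bridge, here 192.168.94.0/24. A small sketch of that selection; the step size is inferred from the logged candidates rather than taken from minikube source:

package main

import "fmt"

// pickSubnet returns the first candidate 192.168.x.0/24 subnet that is not
// already taken, stepping the third octet by 9 as the logged candidates do.
func pickSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 254; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	fmt.Println(pickSubnet(taken)) // 192.168.94.0/24, matching the log
}
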
	I1124 09:06:07.843668  712609 network_create.go:124] attempt to create docker network embed-certs-841285 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 09:06:07.843740  712609 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-841285 embed-certs-841285
	I1124 09:06:07.940716  712609 network_create.go:108] docker network embed-certs-841285 192.168.94.0/24 created
	I1124 09:06:07.940787  712609 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-841285" container
	I1124 09:06:07.940887  712609 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:06:07.991813  712609 cli_runner.go:164] Run: docker volume create embed-certs-841285 --label name.minikube.sigs.k8s.io=embed-certs-841285 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:06:08.061119  712609 oci.go:103] Successfully created a docker volume embed-certs-841285
	I1124 09:06:08.061364  712609 cli_runner.go:164] Run: docker run --rm --name embed-certs-841285-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-841285 --entrypoint /usr/bin/test -v embed-certs-841285:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:06:08.685239  712609 oci.go:107] Successfully prepared a docker volume embed-certs-841285
	I1124 09:06:08.685329  712609 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1124 09:06:08.685345  712609 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 09:06:08.685429  712609 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-841285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 09:06:11.957004  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:06:11.957029  710410 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:06:11.957098  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:11.977858  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:11.980623  710410 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:11.980648  710410 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:06:11.980706  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:11.987358  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:11.995845  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:12.012731  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:12.116424  710410 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 09:06:12.116446  710410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1124 09:06:12.124317  710410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:06:12.140300  710410 node_ready.go:35] waiting up to 6m0s for node "no-preload-820576" to be "Ready" ...
	I1124 09:06:12.145652  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:06:12.145676  710410 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:06:12.145652  710410 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 09:06:12.145723  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:12.145726  710410 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 09:06:12.145895  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:12.167372  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:06:12.167400  710410 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:06:12.188298  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:06:12.188336  710410 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:06:12.189071  710410 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:06:12.189091  710410 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 09:06:12.208709  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:06:12.208735  710410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:06:12.212245  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:06:12.251739  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:06:12.251780  710410 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1124 09:06:12.254669  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.254725  710410 retry.go:31] will retry after 267.520426ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:06:12.254757  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.254783  710410 retry.go:31] will retry after 187.263022ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.267555  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:06:12.267581  710410 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W1124 09:06:12.271523  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.271557  710410 retry.go:31] will retry after 197.857566ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.280900  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:06:12.280922  710410 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:06:12.293352  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:06:12.293374  710410 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:06:12.305732  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:06:12.305754  710410 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:06:12.393825  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:06:12.442360  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1124 09:06:12.459398  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.459609  710410 retry.go:31] will retry after 128.110746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.470528  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1124 09:06:12.515023  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.515066  710410 retry.go:31] will retry after 492.443212ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.523209  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1124 09:06:12.537365  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.537415  710410 retry.go:31] will retry after 547.534652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:06:12.576068  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.576105  710410 retry.go:31] will retry after 490.57105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.588191  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1124 09:06:12.645758  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.645813  710410 retry.go:31] will retry after 546.072247ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:11.569200  709503 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.314051805s)
	I1124 09:06:12.034820  709503 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.736518516s)
	I1124 09:06:12.154054  709503 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.773444144s)
	I1124 09:06:12.154100  709503 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-128377"
	I1124 09:06:13.064354  709503 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.495850323s)
	I1124 09:06:13.064429  709503 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.505387882s)
	I1124 09:06:13.064449  709503 api_server.go:72] duration metric: took 5.129072136s to wait for apiserver process to appear ...
	I1124 09:06:13.064626  709503 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:06:13.064742  709503 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:06:13.067049  709503 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-128377 addons enable metrics-server
	
	I1124 09:06:13.068589  709503 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1124 09:06:10.479269  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:06:10.479328  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:10.479389  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:10.510533  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:10.510577  685562 cri.go:89] found id: "1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:06:10.510583  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:10.510586  685562 cri.go:89] found id: ""
	I1124 09:06:10.510593  685562 logs.go:282] 3 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:10.510670  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.515076  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.519239  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.523408  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:10.523496  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:10.573118  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:10.573140  685562 cri.go:89] found id: ""
	I1124 09:06:10.573151  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:10.573203  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.580440  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:10.580552  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:10.633397  685562 cri.go:89] found id: ""
	I1124 09:06:10.633453  685562 logs.go:282] 0 containers: []
	W1124 09:06:10.633475  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:10.633493  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:10.633564  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:10.690354  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:10.690382  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:10.690413  685562 cri.go:89] found id: ""
	I1124 09:06:10.690423  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:10.690531  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.695963  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.701490  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:10.701564  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:10.737302  685562 cri.go:89] found id: ""
	I1124 09:06:10.737334  685562 logs.go:282] 0 containers: []
	W1124 09:06:10.737346  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:10.737355  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:10.737429  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:10.775391  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:10.775414  685562 cri.go:89] found id: "4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:06:10.775432  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:10.775437  685562 cri.go:89] found id: ""
	I1124 09:06:10.775447  685562 logs.go:282] 3 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:10.775534  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.781150  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.786536  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.792009  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:10.792081  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:10.834058  685562 cri.go:89] found id: ""
	I1124 09:06:10.834086  685562 logs.go:282] 0 containers: []
	W1124 09:06:10.834096  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:10.834105  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:10.834176  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:10.878003  685562 cri.go:89] found id: ""
	I1124 09:06:10.878038  685562 logs.go:282] 0 containers: []
	W1124 09:06:10.878049  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:10.878062  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:10.878087  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:10.933766  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:10.933861  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:10.979203  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:10.979242  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:11.070829  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:11.070863  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1124 09:06:13.007920  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:13.067827  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:13.085967  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1124 09:06:13.158832  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:13.158873  710410 retry.go:31] will retry after 555.195364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:13.193126  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1124 09:06:13.228891  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:13.228930  710410 retry.go:31] will retry after 606.090345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:13.714698  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:13.835800  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:14.767388  710410 node_ready.go:49] node "no-preload-820576" is "Ready"
	I1124 09:06:14.767429  710410 node_ready.go:38] duration metric: took 2.627095095s for node "no-preload-820576" to be "Ready" ...
	I1124 09:06:14.767447  710410 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:06:14.767526  710410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:06:15.446416  710410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.360392286s)
	I1124 09:06:15.446753  710410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.253580665s)
	I1124 09:06:15.447060  710410 addons.go:495] Verifying addon metrics-server=true in "no-preload-820576"
	I1124 09:06:15.448304  710410 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-820576 addons enable metrics-server
	
	I1124 09:06:15.502159  710410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.787411152s)
	I1124 09:06:15.502312  710410 api_server.go:72] duration metric: took 3.620869952s to wait for apiserver process to appear ...
	I1124 09:06:15.502330  710410 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:06:15.502354  710410 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 09:06:15.502435  710410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.666417463s)
	I1124 09:06:15.507693  710410 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:06:15.507720  710410 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:06:15.510070  710410 out.go:179] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	I1124 09:06:13.069584  709503 addons.go:530] duration metric: took 5.134824432s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1124 09:06:13.074420  709503 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1124 09:06:13.074441  709503 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1124 09:06:13.565056  709503 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:06:13.573074  709503 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 09:06:13.576874  709503 api_server.go:141] control plane version: v1.28.0
	I1124 09:06:13.576905  709503 api_server.go:131] duration metric: took 512.183788ms to wait for apiserver health ...
	I1124 09:06:13.576916  709503 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:06:13.584383  709503 system_pods.go:59] 9 kube-system pods found
	I1124 09:06:13.584495  709503 system_pods.go:61] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:13.584512  709503 system_pods.go:61] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:06:13.584522  709503 system_pods.go:61] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:06:13.584532  709503 system_pods.go:61] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:06:13.584541  709503 system_pods.go:61] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:06:13.584561  709503 system_pods.go:61] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:06:13.584568  709503 system_pods.go:61] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:06:13.584576  709503 system_pods.go:61] "metrics-server-57f55c9bc5-77qfh" [cdcc0048-22cc-48f4-be39-99715f4aaa66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:06:13.584583  709503 system_pods.go:61] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:13.584592  709503 system_pods.go:74] duration metric: took 7.668146ms to wait for pod list to return data ...
	I1124 09:06:13.584602  709503 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:06:13.588282  709503 default_sa.go:45] found service account: "default"
	I1124 09:06:13.588332  709503 default_sa.go:55] duration metric: took 3.724838ms for default service account to be created ...
	I1124 09:06:13.588350  709503 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:06:13.592454  709503 system_pods.go:86] 9 kube-system pods found
	I1124 09:06:13.592506  709503 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:13.592520  709503 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:06:13.592530  709503 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:06:13.592541  709503 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:06:13.592554  709503 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:06:13.592567  709503 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:06:13.592578  709503 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:06:13.592588  709503 system_pods.go:89] "metrics-server-57f55c9bc5-77qfh" [cdcc0048-22cc-48f4-be39-99715f4aaa66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:06:13.592606  709503 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:13.592616  709503 system_pods.go:126] duration metric: took 4.252001ms to wait for k8s-apps to be running ...
	I1124 09:06:13.592626  709503 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:06:13.592674  709503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:06:13.612442  709503 system_svc.go:56] duration metric: took 19.805358ms WaitForService to wait for kubelet
	I1124 09:06:13.612506  709503 kubeadm.go:587] duration metric: took 5.677127372s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:06:13.612540  709503 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:06:13.615980  709503 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:06:13.616017  709503 node_conditions.go:123] node cpu capacity is 8
	I1124 09:06:13.616037  709503 node_conditions.go:105] duration metric: took 3.491408ms to run NodePressure ...
	I1124 09:06:13.616060  709503 start.go:242] waiting for startup goroutines ...
	I1124 09:06:13.616072  709503 start.go:247] waiting for cluster config update ...
	I1124 09:06:13.616087  709503 start.go:256] writing updated cluster config ...
	I1124 09:06:13.616411  709503 ssh_runner.go:195] Run: rm -f paused
	I1124 09:06:13.622586  709503 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:06:13.628591  709503 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vxxnm" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:06:15.638301  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	I1124 09:06:12.955135  712609 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-841285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.269650036s)
	I1124 09:06:12.955177  712609 kic.go:203] duration metric: took 4.269827271s to extract preloaded images to volume ...
	W1124 09:06:12.955271  712609 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:06:12.955307  712609 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:06:12.955360  712609 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:06:13.076133  712609 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-841285 --name embed-certs-841285 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-841285 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-841285 --network embed-certs-841285 --ip 192.168.94.2 --volume embed-certs-841285:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:06:13.540475  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Running}}
	I1124 09:06:13.565052  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Status}}
	I1124 09:06:13.591297  712609 cli_runner.go:164] Run: docker exec embed-certs-841285 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:06:13.656882  712609 oci.go:144] the created container "embed-certs-841285" has a running status.
	I1124 09:06:13.656945  712609 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa...
	I1124 09:06:13.819842  712609 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:06:13.853629  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Status}}
	I1124 09:06:13.880952  712609 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:06:13.880975  712609 kic_runner.go:114] Args: [docker exec --privileged embed-certs-841285 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:06:13.938355  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Status}}
	I1124 09:06:13.964024  712609 machine.go:94] provisionDockerMachine start ...
	I1124 09:06:13.964165  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:13.997714  712609 main.go:143] libmachine: Using SSH client type: native
	I1124 09:06:13.998308  712609 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1124 09:06:13.998364  712609 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:06:13.999301  712609 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54278->127.0.0.1:33083: read: connection reset by peer
	I1124 09:06:17.148399  712609 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-841285
	
	I1124 09:06:17.148432  712609 ubuntu.go:182] provisioning hostname "embed-certs-841285"
	I1124 09:06:17.148523  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:17.169142  712609 main.go:143] libmachine: Using SSH client type: native
	I1124 09:06:17.169368  712609 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1124 09:06:17.169382  712609 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-841285 && echo "embed-certs-841285" | sudo tee /etc/hostname
	I1124 09:06:17.328945  712609 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-841285
	
	I1124 09:06:17.329026  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:17.346388  712609 main.go:143] libmachine: Using SSH client type: native
	I1124 09:06:17.346664  712609 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1124 09:06:17.346683  712609 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-841285' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-841285/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-841285' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:06:15.511184  710410 addons.go:530] duration metric: took 3.629676818s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I1124 09:06:16.002642  710410 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 09:06:16.009012  710410 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 09:06:16.010266  710410 api_server.go:141] control plane version: v1.35.0-beta.0
	I1124 09:06:16.010304  710410 api_server.go:131] duration metric: took 507.960092ms to wait for apiserver health ...
	I1124 09:06:16.010318  710410 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:06:16.014692  710410 system_pods.go:59] 9 kube-system pods found
	I1124 09:06:16.014742  710410 system_pods.go:61] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:16.014756  710410 system_pods.go:61] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:06:16.014777  710410 system_pods.go:61] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:06:16.014826  710410 system_pods.go:61] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:06:16.014841  710410 system_pods.go:61] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:06:16.014851  710410 system_pods.go:61] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:06:16.014864  710410 system_pods.go:61] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:06:16.014872  710410 system_pods.go:61] "metrics-server-5d785b57d4-pd54z" [09e6bd80-a8d1-4b28-b18a-094e3667ef9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:06:16.014890  710410 system_pods.go:61] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:16.014898  710410 system_pods.go:74] duration metric: took 4.569905ms to wait for pod list to return data ...
	I1124 09:06:16.014907  710410 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:06:16.017234  710410 default_sa.go:45] found service account: "default"
	I1124 09:06:16.017256  710410 default_sa.go:55] duration metric: took 2.341243ms for default service account to be created ...
	I1124 09:06:16.017265  710410 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:06:16.020426  710410 system_pods.go:86] 9 kube-system pods found
	I1124 09:06:16.020482  710410 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:16.020495  710410 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:06:16.020506  710410 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:06:16.020514  710410 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:06:16.020525  710410 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:06:16.020536  710410 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:06:16.020544  710410 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:06:16.020555  710410 system_pods.go:89] "metrics-server-5d785b57d4-pd54z" [09e6bd80-a8d1-4b28-b18a-094e3667ef9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:06:16.020569  710410 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:16.020580  710410 system_pods.go:126] duration metric: took 3.30745ms to wait for k8s-apps to be running ...
	I1124 09:06:16.020593  710410 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:06:16.020644  710410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:06:16.037995  710410 system_svc.go:56] duration metric: took 17.390664ms WaitForService to wait for kubelet
	I1124 09:06:16.038027  710410 kubeadm.go:587] duration metric: took 4.156587016s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:06:16.038052  710410 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:06:16.040600  710410 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:06:16.040626  710410 node_conditions.go:123] node cpu capacity is 8
	I1124 09:06:16.040644  710410 node_conditions.go:105] duration metric: took 2.58546ms to run NodePressure ...
	I1124 09:06:16.040658  710410 start.go:242] waiting for startup goroutines ...
	I1124 09:06:16.040672  710410 start.go:247] waiting for cluster config update ...
	I1124 09:06:16.040687  710410 start.go:256] writing updated cluster config ...
	I1124 09:06:16.041014  710410 ssh_runner.go:195] Run: rm -f paused
	I1124 09:06:16.045332  710410 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:06:16.048757  710410 pod_ready.go:83] waiting for pod "coredns-7d764666f9-b6dpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:17.491372  712609 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:06:17.491411  712609 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:06:17.491444  712609 ubuntu.go:190] setting up certificates
	I1124 09:06:17.491502  712609 provision.go:84] configureAuth start
	I1124 09:06:17.491582  712609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-841285
	I1124 09:06:17.509416  712609 provision.go:143] copyHostCerts
	I1124 09:06:17.509497  712609 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:06:17.509513  712609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:06:17.509698  712609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:06:17.509870  712609 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:06:17.509885  712609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:06:17.509930  712609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:06:17.510041  712609 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:06:17.510054  712609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:06:17.510092  712609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:06:17.510183  712609 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.embed-certs-841285 san=[127.0.0.1 192.168.94.2 embed-certs-841285 localhost minikube]
	I1124 09:06:17.622425  712609 provision.go:177] copyRemoteCerts
	I1124 09:06:17.622510  712609 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:06:17.622560  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:17.640855  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:17.744127  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:06:17.764220  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:06:17.782902  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:06:17.801085  712609 provision.go:87] duration metric: took 309.559848ms to configureAuth
	I1124 09:06:17.801119  712609 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:06:17.801320  712609 config.go:182] Loaded profile config "embed-certs-841285": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 09:06:17.801334  712609 machine.go:97] duration metric: took 3.837283638s to provisionDockerMachine
	I1124 09:06:17.801342  712609 client.go:176] duration metric: took 10.043568101s to LocalClient.Create
	I1124 09:06:17.801360  712609 start.go:167] duration metric: took 10.04363162s to libmachine.API.Create "embed-certs-841285"
	I1124 09:06:17.801369  712609 start.go:293] postStartSetup for "embed-certs-841285" (driver="docker")
	I1124 09:06:17.801378  712609 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:06:17.801431  712609 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:06:17.801498  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:17.820054  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:17.929888  712609 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:06:17.934299  712609 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:06:17.934331  712609 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:06:17.934361  712609 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:06:17.934428  712609 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:06:17.934583  712609 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:06:17.934723  712609 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:06:17.944993  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:06:17.969913  712609 start.go:296] duration metric: took 168.526621ms for postStartSetup
	I1124 09:06:17.970380  712609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-841285
	I1124 09:06:17.996605  712609 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/config.json ...
	I1124 09:06:17.996936  712609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:06:17.996994  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:18.018740  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:18.128353  712609 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:06:18.133747  712609 start.go:128] duration metric: took 10.377814334s to createHost
	I1124 09:06:18.133774  712609 start.go:83] releasing machines lock for "embed-certs-841285", held for 10.377970244s
	I1124 09:06:18.133876  712609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-841285
	I1124 09:06:18.150815  712609 ssh_runner.go:195] Run: cat /version.json
	I1124 09:06:18.150874  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:18.150943  712609 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:06:18.151022  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:18.169533  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:18.169804  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:18.269428  712609 ssh_runner.go:195] Run: systemctl --version
	I1124 09:06:18.321761  712609 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:06:18.327046  712609 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:06:18.327133  712609 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:06:18.352096  712609 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:06:18.352118  712609 start.go:496] detecting cgroup driver to use...
	I1124 09:06:18.352148  712609 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:06:18.352186  712609 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:06:18.366957  712609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:06:18.381693  712609 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:06:18.381752  712609 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:06:18.398113  712609 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:06:18.415593  712609 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:06:18.502067  712609 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:06:18.601361  712609 docker.go:234] disabling docker service ...
	I1124 09:06:18.601437  712609 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:06:18.623658  712609 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:06:18.639727  712609 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:06:18.740531  712609 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:06:18.828884  712609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:06:18.842742  712609 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:06:18.857868  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:19.175440  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 09:06:19.187113  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:06:19.196765  712609 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:06:19.196825  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:06:19.208310  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:06:19.218395  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:06:19.228392  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:06:19.237420  712609 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:06:19.245996  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:06:19.255260  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:06:19.264330  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:06:19.273668  712609 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:06:19.281360  712609 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:06:19.289193  712609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:19.364645  712609 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 09:06:19.463547  712609 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:06:19.463645  712609 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:06:19.467939  712609 start.go:564] Will wait 60s for crictl version
	I1124 09:06:19.467997  712609 ssh_runner.go:195] Run: which crictl
	I1124 09:06:19.472220  712609 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:06:19.499311  712609 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:06:19.499385  712609 ssh_runner.go:195] Run: containerd --version
	I1124 09:06:19.521824  712609 ssh_runner.go:195] Run: containerd --version
	I1124 09:06:19.545239  712609 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.1.5 ...
	W1124 09:06:18.134936  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:20.633103  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	I1124 09:06:19.546299  712609 cli_runner.go:164] Run: docker network inspect embed-certs-841285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:06:19.564025  712609 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 09:06:19.568256  712609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:06:19.579411  712609 kubeadm.go:884] updating cluster {Name:embed-certs-841285 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-841285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:06:19.579631  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:19.895986  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:20.213647  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:20.537503  712609 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1124 09:06:20.537655  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:20.844686  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:21.154327  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:21.492353  712609 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:06:21.518072  712609 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:06:21.518095  712609 containerd.go:534] Images already preloaded, skipping extraction
	I1124 09:06:21.518159  712609 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:06:21.543595  712609 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:06:21.543618  712609 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:06:21.543626  712609 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 containerd true true} ...
	I1124 09:06:21.543712  712609 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-841285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-841285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:06:21.543772  712609 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:06:21.574910  712609 cni.go:84] Creating CNI manager for ""
	I1124 09:06:21.574936  712609 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:06:21.574957  712609 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:06:21.574989  712609 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-841285 NodeName:embed-certs-841285 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:06:21.575132  712609 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-841285"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:06:21.575206  712609 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:06:21.583842  712609 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:06:21.583925  712609 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:06:21.591929  712609 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 09:06:21.604987  712609 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:06:21.621814  712609 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1124 09:06:21.635273  712609 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:06:21.638971  712609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:06:21.649297  712609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:21.739776  712609 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:06:21.764758  712609 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285 for IP: 192.168.94.2
	I1124 09:06:21.764785  712609 certs.go:195] generating shared ca certs ...
	I1124 09:06:21.764810  712609 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.764986  712609 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:06:21.765033  712609 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:06:21.765044  712609 certs.go:257] generating profile certs ...
	I1124 09:06:21.765102  712609 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/client.key
	I1124 09:06:21.765114  712609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/client.crt with IP's: []
	I1124 09:06:21.864750  712609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/client.crt ...
	I1124 09:06:21.864775  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/client.crt: {Name:mkc060bfda49863ba613e074874e844ca9a9e70e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.864958  712609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/client.key ...
	I1124 09:06:21.864973  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/client.key: {Name:mkd5104c3dae3b5f7ae3fa31a87f62c7e96b054a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.865062  712609 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.key.97c836bb
	I1124 09:06:21.865080  712609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.crt.97c836bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 09:06:21.904289  712609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.crt.97c836bb ...
	I1124 09:06:21.904314  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.crt.97c836bb: {Name:mkda4f19a07c086a3f5c62a810713f45695762dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.904472  712609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.key.97c836bb ...
	I1124 09:06:21.904486  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.key.97c836bb: {Name:mk8047fab627a190f575ab4aeb5179696588ecee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.904563  712609 certs.go:382] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.crt.97c836bb -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.crt
	I1124 09:06:21.904638  712609 certs.go:386] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.key.97c836bb -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.key
	I1124 09:06:21.904692  712609 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.key
	I1124 09:06:21.904707  712609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.crt with IP's: []
	I1124 09:06:21.962903  712609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.crt ...
	I1124 09:06:21.962931  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.crt: {Name:mk2ac14b7d31660738cdb7ddd69ce29a7ebf81c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.963075  712609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.key ...
	I1124 09:06:21.963090  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.key: {Name:mk861035d219c3f6a3f9576912efeef0ad1f2764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.963267  712609 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:06:21.963310  712609 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:06:21.963320  712609 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:06:21.963351  712609 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:06:21.963376  712609 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:06:21.963398  712609 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:06:21.963445  712609 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:06:21.964070  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:06:21.985738  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:06:22.006551  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:06:22.027007  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:06:22.047398  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 09:06:22.069149  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:06:22.088426  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:06:22.108672  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:06:22.129917  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:06:22.154617  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:06:22.175965  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:06:22.197185  712609 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:06:22.212418  712609 ssh_runner.go:195] Run: openssl version
	I1124 09:06:22.220166  712609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:06:22.229632  712609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:22.234267  712609 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:22.234327  712609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:22.279000  712609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:06:22.289299  712609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:06:22.299120  712609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:06:22.303121  712609 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:06:22.303174  712609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:06:22.342953  712609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:06:22.353364  712609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:06:22.363375  712609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:06:22.367741  712609 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:06:22.367795  712609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:06:22.417612  712609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:06:22.428519  712609 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:06:22.432272  712609 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:06:22.432340  712609 kubeadm.go:401] StartCluster: {Name:embed-certs-841285 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-841285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:06:22.432434  712609 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:06:22.432540  712609 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:06:22.465522  712609 cri.go:89] found id: ""
	I1124 09:06:22.465607  712609 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:06:22.474541  712609 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:06:22.483474  712609 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:06:22.483532  712609 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:06:22.492207  712609 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:06:22.492228  712609 kubeadm.go:158] found existing configuration files:
	
	I1124 09:06:22.492272  712609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:06:22.500211  712609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:06:22.500267  712609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:06:22.508026  712609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:06:22.516932  712609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:06:22.516975  712609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:06:22.525873  712609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:06:22.534520  712609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:06:22.534574  712609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:06:22.543311  712609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:06:22.552688  712609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:06:22.552736  712609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:06:22.561991  712609 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:06:22.608133  712609 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 09:06:22.608234  712609 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:06:22.630269  712609 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:06:22.630387  712609 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:06:22.630455  712609 kubeadm.go:319] OS: Linux
	I1124 09:06:22.630534  712609 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:06:22.630621  712609 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:06:22.630695  712609 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:06:22.630774  712609 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:06:22.630857  712609 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:06:22.630942  712609 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:06:22.631008  712609 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:06:22.631088  712609 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:06:22.699764  712609 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:06:22.699918  712609 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:06:22.700047  712609 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:06:22.705501  712609 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1124 09:06:18.067100  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:20.554983  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:21.157595  685562 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.08670591s)
	W1124 09:06:21.157642  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 09:06:21.157655  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:21.157675  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:21.191156  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:21.191193  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:21.226292  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:21.226323  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:21.260806  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:21.260836  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:21.304040  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:21.304069  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:21.318332  685562 logs.go:123] Gathering logs for kube-apiserver [1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365] ...
	I1124 09:06:21.318357  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:06:21.352772  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:21.352805  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:21.384887  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:21.384916  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:21.413079  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:21.413105  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:21.439058  685562 logs.go:123] Gathering logs for kube-controller-manager [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d] ...
	I1124 09:06:21.439086  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:06:23.966537  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1124 09:06:22.635345  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:25.134573  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	I1124 09:06:22.709334  712609 out.go:252]   - Generating certificates and keys ...
	I1124 09:06:22.709444  712609 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:06:22.709600  712609 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:06:23.287709  712609 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:06:23.440107  712609 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:06:23.712858  712609 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:06:23.920983  712609 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:06:24.576354  712609 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:06:24.576583  712609 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-841285 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 09:06:25.340646  712609 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:06:25.340931  712609 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-841285 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 09:06:25.560248  712609 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:06:25.902615  712609 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:06:26.142353  712609 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:06:26.142521  712609 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:06:26.237440  712609 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:06:26.780742  712609 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:06:26.979631  712609 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:06:27.137635  712609 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:06:27.529861  712609 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:06:27.530452  712609 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:06:27.535586  712609 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1124 09:06:23.055074  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:25.555355  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
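The repeated pod_ready warnings in these logs come from minikube polling the coredns pod and finding that its Ready condition is still false. A minimal, hypothetical sketch of such a readiness check using the client-go API types (this is not minikube's own pod_ready.go, and it requires the k8s.io/api module):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{} // in practice this would come from a clientset Get call
	fmt.Println("ready:", isPodReady(pod))
}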
	I1124 09:06:25.205914  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:50432->192.168.76.2:8443: read: connection reset by peer
	I1124 09:06:25.205996  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:25.206062  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:25.239861  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:25.239889  685562 cri.go:89] found id: "1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:06:25.239895  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:25.239901  685562 cri.go:89] found id: ""
	I1124 09:06:25.239912  685562 logs.go:282] 3 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:25.239978  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.244271  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.248558  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.252330  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:25.252389  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:25.280363  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:25.280387  685562 cri.go:89] found id: ""
	I1124 09:06:25.280399  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:25.280496  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.284837  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:25.284895  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:25.311596  685562 cri.go:89] found id: ""
	I1124 09:06:25.311624  685562 logs.go:282] 0 containers: []
	W1124 09:06:25.311635  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:25.311644  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:25.311701  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:25.339841  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:25.339864  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:25.339868  685562 cri.go:89] found id: ""
	I1124 09:06:25.339876  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:25.339949  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.344303  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.348701  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:25.348761  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:25.376996  685562 cri.go:89] found id: ""
	I1124 09:06:25.377021  685562 logs.go:282] 0 containers: []
	W1124 09:06:25.377031  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:25.377040  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:25.377099  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:25.403929  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:25.403953  685562 cri.go:89] found id: "4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:06:25.403959  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:25.403964  685562 cri.go:89] found id: ""
	I1124 09:06:25.403973  685562 logs.go:282] 3 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:25.404026  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.408011  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.412018  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.415684  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:25.415744  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:25.443570  685562 cri.go:89] found id: ""
	I1124 09:06:25.443597  685562 logs.go:282] 0 containers: []
	W1124 09:06:25.443609  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:25.443617  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:25.443677  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:25.471902  685562 cri.go:89] found id: ""
	I1124 09:06:25.471937  685562 logs.go:282] 0 containers: []
	W1124 09:06:25.471948  685562 logs.go:284] No container was found matching "storage-provisioner"
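The "listing CRI containers" / "found id" lines above record minikube shelling out to crictl and collecting the container IDs it prints, one per line; an empty result produces the "0 containers: []" warnings. A rough local sketch of that pattern (run directly rather than over SSH as the test harness does; the container name is just an example):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns the IDs it prints.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}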
	I1124 09:06:25.471962  685562 logs.go:123] Gathering logs for kube-apiserver [1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365] ...
	I1124 09:06:25.471979  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:06:25.506524  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:25.506556  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:25.545245  685562 logs.go:123] Gathering logs for kube-controller-manager [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d] ...
	I1124 09:06:25.545276  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:06:25.578503  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:25.578540  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:25.616739  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:25.616770  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:25.661551  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:25.661582  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:25.694323  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:25.694356  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:25.709071  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:25.709097  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:25.770429  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:25.770452  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:25.770502  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:25.809925  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:25.809960  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:25.844164  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:25.844194  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:25.872097  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:25.872128  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:25.900658  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:25.900686  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:25.981821  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:25.981857  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:28.514526  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:28.515025  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
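Each "Checking apiserver healthz" / "stopped:" pair above is one probe of the apiserver's /healthz endpoint failing while the control plane restarts. A self-contained sketch of such a probe (URL and timeout taken from the log and chosen for illustration; this is not minikube's actual api_server.go check):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Certificate verification is skipped here because the probe targets a bare IP;
			// a production client would pin the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. "connection refused" while the apiserver is down
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // "200 ok" once the apiserver is healthy
}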
	I1124 09:06:28.515093  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:28.515149  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:28.548258  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:28.548286  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:28.548293  685562 cri.go:89] found id: ""
	I1124 09:06:28.548303  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:28.548371  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.553603  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.558175  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:28.558298  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:28.596802  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:28.596826  685562 cri.go:89] found id: ""
	I1124 09:06:28.596838  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:28.596894  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.602045  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:28.602127  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:28.636975  685562 cri.go:89] found id: ""
	I1124 09:06:28.637002  685562 logs.go:282] 0 containers: []
	W1124 09:06:28.637018  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:28.637026  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:28.637089  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:28.672539  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:28.672577  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:28.672584  685562 cri.go:89] found id: ""
	I1124 09:06:28.672594  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:28.672658  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.677886  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.682559  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:28.682629  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:28.714211  685562 cri.go:89] found id: ""
	I1124 09:06:28.714242  685562 logs.go:282] 0 containers: []
	W1124 09:06:28.714253  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:28.714262  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:28.714327  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:28.749220  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:28.749254  685562 cri.go:89] found id: "4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:06:28.749260  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:28.749264  685562 cri.go:89] found id: ""
	I1124 09:06:28.749274  685562 logs.go:282] 3 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:28.749337  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.754530  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.758971  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.763632  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:28.763702  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:28.800732  685562 cri.go:89] found id: ""
	I1124 09:06:28.800760  685562 logs.go:282] 0 containers: []
	W1124 09:06:28.800771  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:28.800780  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:28.800852  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:28.836364  685562 cri.go:89] found id: ""
	I1124 09:06:28.836401  685562 logs.go:282] 0 containers: []
	W1124 09:06:28.836412  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:28.836425  685562 logs.go:123] Gathering logs for kube-controller-manager [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d] ...
	I1124 09:06:28.836508  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:06:28.865658  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:28.865685  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:28.902970  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:28.903005  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:28.948455  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:28.948504  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:28.983980  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:28.984010  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:29.070849  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:29.070890  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:29.088719  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:29.088760  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:29.152338  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:29.152362  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:29.152385  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:29.189194  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:29.189234  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:29.228399  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:29.228437  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:29.270425  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:29.270488  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:29.310086  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:29.310117  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:29.349346  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:29.349377  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	W1124 09:06:27.135771  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:29.634500  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	I1124 09:06:27.536998  712609 out.go:252]   - Booting up control plane ...
	I1124 09:06:27.537131  712609 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:06:27.537241  712609 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:06:27.537890  712609 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:06:27.557360  712609 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:06:27.557556  712609 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:06:27.566014  712609 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:06:27.566352  712609 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:06:27.566429  712609 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:06:27.689337  712609 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:06:27.689539  712609 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:06:29.690081  712609 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000905789s
	I1124 09:06:29.695079  712609 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:06:29.695207  712609 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 09:06:29.695315  712609 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:06:29.695440  712609 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:06:30.732893  712609 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.037758856s
	I1124 09:06:31.697336  712609 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.002233718s
	W1124 09:06:28.055145  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:30.055642  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:32.554238  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:33.196787  712609 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501610996s
	I1124 09:06:33.211759  712609 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:06:33.220742  712609 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:06:33.228614  712609 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:06:33.228906  712609 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-841285 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:06:33.236403  712609 kubeadm.go:319] [bootstrap-token] Using token: d17y4k.5oks848f61dz75lb
	I1124 09:06:33.238015  712609 out.go:252]   - Configuring RBAC rules ...
	I1124 09:06:33.238150  712609 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:06:33.240584  712609 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:06:33.245621  712609 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:06:33.247952  712609 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:06:33.251093  712609 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:06:33.253507  712609 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:06:33.601539  712609 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:06:34.016941  712609 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:06:34.602603  712609 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:06:34.603507  712609 kubeadm.go:319] 
	I1124 09:06:34.603600  712609 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:06:34.603615  712609 kubeadm.go:319] 
	I1124 09:06:34.603724  712609 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:06:34.603743  712609 kubeadm.go:319] 
	I1124 09:06:34.603765  712609 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:06:34.603864  712609 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:06:34.603941  712609 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:06:34.603950  712609 kubeadm.go:319] 
	I1124 09:06:34.604020  712609 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:06:34.604028  712609 kubeadm.go:319] 
	I1124 09:06:34.604085  712609 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:06:34.604093  712609 kubeadm.go:319] 
	I1124 09:06:34.604169  712609 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:06:34.604279  712609 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:06:34.604381  712609 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:06:34.604388  712609 kubeadm.go:319] 
	I1124 09:06:34.604520  712609 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:06:34.604605  712609 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:06:34.604620  712609 kubeadm.go:319] 
	I1124 09:06:34.604694  712609 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token d17y4k.5oks848f61dz75lb \
	I1124 09:06:34.604791  712609 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be \
	I1124 09:06:34.604825  712609 kubeadm.go:319] 	--control-plane 
	I1124 09:06:34.604832  712609 kubeadm.go:319] 
	I1124 09:06:34.604926  712609 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:06:34.604934  712609 kubeadm.go:319] 
	I1124 09:06:34.605025  712609 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token d17y4k.5oks848f61dz75lb \
	I1124 09:06:34.605148  712609 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be 
	I1124 09:06:34.607652  712609 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:06:34.607774  712609 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:06:34.607803  712609 cni.go:84] Creating CNI manager for ""
	I1124 09:06:34.607817  712609 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:06:34.609642  712609 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:06:31.881862  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:31.882338  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:31.882394  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:31.882445  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:31.909213  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:31.909236  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:31.909240  685562 cri.go:89] found id: ""
	I1124 09:06:31.909247  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:31.909291  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:31.913329  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:31.917041  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:31.917093  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:31.943024  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:31.943044  685562 cri.go:89] found id: ""
	I1124 09:06:31.943051  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:31.943103  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:31.947092  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:31.947162  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:31.973577  685562 cri.go:89] found id: ""
	I1124 09:06:31.973599  685562 logs.go:282] 0 containers: []
	W1124 09:06:31.973607  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:31.973613  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:31.973658  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:31.999230  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:31.999254  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:31.999258  685562 cri.go:89] found id: ""
	I1124 09:06:31.999266  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:31.999311  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:32.003300  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:32.006900  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:32.006964  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:32.031766  685562 cri.go:89] found id: ""
	I1124 09:06:32.031793  685562 logs.go:282] 0 containers: []
	W1124 09:06:32.031803  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:32.031810  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:32.031873  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:32.059502  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:32.059525  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:32.059530  685562 cri.go:89] found id: ""
	I1124 09:06:32.059537  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:32.059582  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:32.063421  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:32.067085  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:32.067142  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:32.092390  685562 cri.go:89] found id: ""
	I1124 09:06:32.092412  685562 logs.go:282] 0 containers: []
	W1124 09:06:32.092419  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:32.092428  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:32.092509  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:32.117763  685562 cri.go:89] found id: ""
	I1124 09:06:32.117789  685562 logs.go:282] 0 containers: []
	W1124 09:06:32.117797  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:32.117807  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:32.117818  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:32.150083  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:32.150110  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:32.183530  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:32.183564  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:32.217026  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:32.217054  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:32.296676  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:32.296708  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:32.323952  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:32.323979  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:32.349365  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:32.349389  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:32.393026  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:32.393053  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:32.422866  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:32.422894  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:32.436533  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:32.436560  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:32.491046  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:32.491072  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:32.491085  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:32.521289  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:32.521315  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	W1124 09:06:31.634821  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:33.635206  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	I1124 09:06:34.610765  712609 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:06:34.615266  712609 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1124 09:06:34.615285  712609 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:06:34.628934  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
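Above, the generated CNI manifest is copied to /var/tmp/minikube/cni.yaml on the node and applied with the bundled kubectl. A minimal local sketch of the same write-then-apply step (the manifest bytes are a placeholder and the plain "kubectl" name stands in for the versioned binary path seen in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("# placeholder CNI manifest\n") // real contents come from a template
	path := "/var/tmp/minikube/cni.yaml"
	if err := os.WriteFile(path, manifest, 0644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	cmd := exec.Command("sudo", "kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}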
	I1124 09:06:34.828829  712609 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:06:34.828867  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:34.828926  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-841285 minikube.k8s.io/updated_at=2025_11_24T09_06_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=embed-certs-841285 minikube.k8s.io/primary=true
	I1124 09:06:34.840509  712609 ops.go:34] apiserver oom_adj: -16
	I1124 09:06:34.904266  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:35.404241  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:35.905248  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:36.405025  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:36.904407  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:37.404570  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
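The half-second cadence of the "kubectl get sa default" runs above is a poll loop: the check is retried until the default service account exists, a common signal that the new control plane is actually serving requests. A rough sketch of that retry pattern (command, interval, and deadline are illustrative, not minikube's exact values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", "kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account exists; control plane is answering")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log above
	}
	fmt.Println("timed out waiting for the default service account")
}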
	W1124 09:06:35.054174  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:37.054257  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:35.054831  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:35.055205  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:35.055268  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:35.055326  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:35.083391  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:35.083409  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:35.083413  685562 cri.go:89] found id: ""
	I1124 09:06:35.083421  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:35.083510  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.087566  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.091809  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:35.091863  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:35.118108  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:35.118127  685562 cri.go:89] found id: ""
	I1124 09:06:35.118136  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:35.118198  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.122294  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:35.122370  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:35.148804  685562 cri.go:89] found id: ""
	I1124 09:06:35.148824  685562 logs.go:282] 0 containers: []
	W1124 09:06:35.148832  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:35.148837  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:35.148882  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:35.175511  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:35.175534  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:35.175539  685562 cri.go:89] found id: ""
	I1124 09:06:35.175549  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:35.175604  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.179432  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.182990  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:35.183047  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:35.208209  685562 cri.go:89] found id: ""
	I1124 09:06:35.208229  685562 logs.go:282] 0 containers: []
	W1124 09:06:35.208242  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:35.208248  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:35.208294  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:35.234429  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:35.234455  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:35.234506  685562 cri.go:89] found id: ""
	I1124 09:06:35.234515  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:35.234561  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.238390  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.241907  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:35.241961  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:35.269120  685562 cri.go:89] found id: ""
	I1124 09:06:35.269139  685562 logs.go:282] 0 containers: []
	W1124 09:06:35.269151  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:35.269158  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:35.269205  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:35.294592  685562 cri.go:89] found id: ""
	I1124 09:06:35.294615  685562 logs.go:282] 0 containers: []
	W1124 09:06:35.294624  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:35.294637  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:35.294650  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:35.338717  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:35.338746  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:35.369496  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:35.369531  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:35.400289  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:35.400316  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:35.436787  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:35.436819  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:35.473996  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:35.474023  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:35.500945  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:35.500968  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:35.536390  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:35.536420  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:35.620833  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:35.620877  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:35.637934  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:35.637967  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:35.698091  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:35.698115  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:35.698133  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:35.727855  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:35.727886  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:38.263143  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:38.263700  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:38.263765  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:38.263829  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:38.292856  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:38.292878  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:38.292883  685562 cri.go:89] found id: ""
	I1124 09:06:38.292891  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:38.292948  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.297143  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.301133  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:38.301199  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:38.328125  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:38.328156  685562 cri.go:89] found id: ""
	I1124 09:06:38.328169  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:38.328229  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.332380  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:38.332445  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:38.358808  685562 cri.go:89] found id: ""
	I1124 09:06:38.358835  685562 logs.go:282] 0 containers: []
	W1124 09:06:38.358846  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:38.358854  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:38.358919  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:38.385012  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:38.385037  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:38.385042  685562 cri.go:89] found id: ""
	I1124 09:06:38.385050  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:38.385112  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.389205  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.392855  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:38.392906  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:38.419726  685562 cri.go:89] found id: ""
	I1124 09:06:38.419758  685562 logs.go:282] 0 containers: []
	W1124 09:06:38.419770  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:38.419778  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:38.419836  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:38.449557  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:38.449576  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:38.449579  685562 cri.go:89] found id: ""
	I1124 09:06:38.449588  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:38.449635  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.454052  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.458515  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:38.458573  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:38.487500  685562 cri.go:89] found id: ""
	I1124 09:06:38.487529  685562 logs.go:282] 0 containers: []
	W1124 09:06:38.487540  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:38.487549  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:38.487614  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:38.514178  685562 cri.go:89] found id: ""
	I1124 09:06:38.514204  685562 logs.go:282] 0 containers: []
	W1124 09:06:38.514212  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:38.514223  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:38.514233  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:38.574230  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:38.574271  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:38.574290  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:38.618314  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:38.618352  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:38.649077  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:38.649113  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:38.687707  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:38.687738  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:38.731520  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:38.731563  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:38.816355  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:38.816394  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:38.848420  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:38.848447  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:38.883348  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:38.883378  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:38.918351  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:38.918392  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:38.948723  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:38.948764  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:38.985359  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:38.985389  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:37.905005  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:38.405201  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:38.904881  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:39.404418  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:39.905009  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:39.971615  712609 kubeadm.go:1114] duration metric: took 5.142792682s to wait for elevateKubeSystemPrivileges
	I1124 09:06:39.971652  712609 kubeadm.go:403] duration metric: took 17.539316867s to StartCluster
	I1124 09:06:39.971677  712609 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:39.971761  712609 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:06:39.974117  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:39.974376  712609 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:06:39.974397  712609 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:06:39.974479  712609 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:06:39.974582  712609 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-841285"
	I1124 09:06:39.974603  712609 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-841285"
	I1124 09:06:39.974635  712609 host.go:66] Checking if "embed-certs-841285" exists ...
	I1124 09:06:39.974658  712609 config.go:182] Loaded profile config "embed-certs-841285": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 09:06:39.974783  712609 addons.go:70] Setting default-storageclass=true in profile "embed-certs-841285"
	I1124 09:06:39.974821  712609 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-841285"
	I1124 09:06:39.975105  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Status}}
	I1124 09:06:39.975155  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Status}}
	I1124 09:06:39.980273  712609 out.go:179] * Verifying Kubernetes components...
	I1124 09:06:39.981373  712609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:40.002669  712609 addons.go:239] Setting addon default-storageclass=true in "embed-certs-841285"
	I1124 09:06:40.002703  712609 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:06:40.002722  712609 host.go:66] Checking if "embed-certs-841285" exists ...
	I1124 09:06:40.003218  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Status}}
	I1124 09:06:40.004007  712609 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:40.004029  712609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:06:40.004085  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:40.031263  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:40.033666  712609 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:40.033688  712609 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:06:40.033756  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:40.055874  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:40.076508  712609 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:06:40.128368  712609 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:06:40.151264  712609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:40.174106  712609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:40.246855  712609 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 09:06:40.249725  712609 node_ready.go:35] waiting up to 6m0s for node "embed-certs-841285" to be "Ready" ...
	I1124 09:06:40.462156  712609 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1124 09:06:36.134701  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:38.634083  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	I1124 09:06:40.463087  712609 addons.go:530] duration metric: took 488.631539ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:06:40.752073  712609 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-841285" context rescaled to 1 replicas
	W1124 09:06:42.252637  712609 node_ready.go:57] node "embed-certs-841285" has "Ready":"False" status (will retry)
	W1124 09:06:39.054512  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:41.554718  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:41.500869  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:41.501361  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:41.501432  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:41.501525  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:41.529135  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:41.529157  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:41.529162  685562 cri.go:89] found id: ""
	I1124 09:06:41.529170  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:41.529217  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.533428  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.537312  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:41.537378  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:41.565599  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:41.565621  685562 cri.go:89] found id: ""
	I1124 09:06:41.565631  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:41.565677  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.569790  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:41.569850  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:41.596873  685562 cri.go:89] found id: ""
	I1124 09:06:41.596902  685562 logs.go:282] 0 containers: []
	W1124 09:06:41.596910  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:41.596918  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:41.596982  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:41.623993  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:41.624016  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:41.624023  685562 cri.go:89] found id: ""
	I1124 09:06:41.624034  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:41.624092  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.628556  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.633200  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:41.633273  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:41.662861  685562 cri.go:89] found id: ""
	I1124 09:06:41.662887  685562 logs.go:282] 0 containers: []
	W1124 09:06:41.662898  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:41.662906  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:41.662971  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:41.690938  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:41.690959  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:41.690964  685562 cri.go:89] found id: ""
	I1124 09:06:41.690972  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:41.691024  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.695206  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.699275  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:41.699354  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:41.726057  685562 cri.go:89] found id: ""
	I1124 09:06:41.726084  685562 logs.go:282] 0 containers: []
	W1124 09:06:41.726093  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:41.726102  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:41.726160  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:41.753859  685562 cri.go:89] found id: ""
	I1124 09:06:41.753884  685562 logs.go:282] 0 containers: []
	W1124 09:06:41.753895  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:41.753908  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:41.753923  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:41.813479  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:41.813506  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:41.813530  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:41.848937  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:41.848968  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:41.878521  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:41.878548  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:41.913216  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:41.913249  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:41.940651  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:41.940681  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:41.985818  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:41.985863  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:42.070550  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:42.070588  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:42.103179  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:42.103207  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:42.135695  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:42.135723  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:42.167693  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:42.167721  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:42.199176  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:42.199214  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:44.714754  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:44.715204  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:44.715275  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:44.715339  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:44.742930  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:44.742954  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:44.742960  685562 cri.go:89] found id: ""
	I1124 09:06:44.742970  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:44.743020  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:44.747098  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:44.750940  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:44.751001  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:44.777988  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:44.778009  685562 cri.go:89] found id: ""
	I1124 09:06:44.778018  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:44.778072  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:44.781793  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:44.781851  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:44.807424  685562 cri.go:89] found id: ""
	I1124 09:06:44.807454  685562 logs.go:282] 0 containers: []
	W1124 09:06:44.807478  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:44.807496  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:44.807554  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:44.833894  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:44.833917  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:44.833923  685562 cri.go:89] found id: ""
	I1124 09:06:44.833932  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:44.833991  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:44.837845  685562 ssh_runner.go:195] Run: which crictl
	W1124 09:06:41.134407  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:43.633885  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:44.253048  712609 node_ready.go:57] node "embed-certs-841285" has "Ready":"False" status (will retry)
	W1124 09:06:46.753243  712609 node_ready.go:57] node "embed-certs-841285" has "Ready":"False" status (will retry)
	W1124 09:06:43.554785  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:46.054013  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:44.841712  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:44.841768  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:44.867127  685562 cri.go:89] found id: ""
	I1124 09:06:44.867152  685562 logs.go:282] 0 containers: []
	W1124 09:06:44.867163  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:44.867171  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:44.867226  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:44.893139  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:44.893161  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:44.893165  685562 cri.go:89] found id: ""
	I1124 09:06:44.893173  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:44.893225  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:44.897049  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:44.900623  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:44.900689  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:44.928422  685562 cri.go:89] found id: ""
	I1124 09:06:44.928453  685562 logs.go:282] 0 containers: []
	W1124 09:06:44.928478  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:44.928493  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:44.928555  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:44.955528  685562 cri.go:89] found id: ""
	I1124 09:06:44.955553  685562 logs.go:282] 0 containers: []
	W1124 09:06:44.955562  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:44.955572  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:44.955585  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:44.969974  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:44.970010  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:45.027796  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:45.027825  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:45.027844  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:45.059560  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:45.059589  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:45.091480  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:45.091510  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:45.119118  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:45.119148  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:45.151248  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:45.151276  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:45.182411  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:45.182439  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:45.226121  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:45.226153  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:45.310078  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:45.310107  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:45.342167  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:45.342197  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:45.369846  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:45.369882  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:47.899244  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:47.899692  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:47.899758  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:47.899824  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:47.929105  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:47.929131  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:47.929138  685562 cri.go:89] found id: ""
	I1124 09:06:47.929148  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:47.929208  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:47.933441  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:47.937325  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:47.937388  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:47.963580  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:47.963607  685562 cri.go:89] found id: ""
	I1124 09:06:47.963617  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:47.963690  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:47.968101  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:47.968172  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:47.996024  685562 cri.go:89] found id: ""
	I1124 09:06:47.996048  685562 logs.go:282] 0 containers: []
	W1124 09:06:47.996056  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:47.996065  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:47.996125  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:48.023413  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:48.023433  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:48.023436  685562 cri.go:89] found id: ""
	I1124 09:06:48.023445  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:48.023525  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:48.027692  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:48.031318  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:48.031395  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:48.059181  685562 cri.go:89] found id: ""
	I1124 09:06:48.059208  685562 logs.go:282] 0 containers: []
	W1124 09:06:48.059219  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:48.059227  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:48.059296  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:48.086294  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:48.086321  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:48.086327  685562 cri.go:89] found id: ""
	I1124 09:06:48.086335  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:48.086400  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:48.090814  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:48.095211  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:48.095280  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:48.122901  685562 cri.go:89] found id: ""
	I1124 09:06:48.122927  685562 logs.go:282] 0 containers: []
	W1124 09:06:48.122939  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:48.122949  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:48.123005  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:48.151342  685562 cri.go:89] found id: ""
	I1124 09:06:48.151383  685562 logs.go:282] 0 containers: []
	W1124 09:06:48.151393  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:48.151404  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:48.151418  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:48.193607  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:48.193643  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:48.226364  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:48.226398  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:48.283581  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:48.283600  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:48.283613  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:48.316978  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:48.317022  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:48.350934  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:48.350963  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:48.385233  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:48.385264  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:48.413799  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:48.413827  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:48.446876  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:48.446904  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:48.526939  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:48.526971  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:48.541619  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:48.541656  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:48.573404  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:48.573436  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	W1124 09:06:48.054454  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:49.554189  710410 pod_ready.go:94] pod "coredns-7d764666f9-b6dpn" is "Ready"
	I1124 09:06:49.554221  710410 pod_ready.go:86] duration metric: took 33.505424734s for pod "coredns-7d764666f9-b6dpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.556706  710410 pod_ready.go:83] waiting for pod "etcd-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.560364  710410 pod_ready.go:94] pod "etcd-no-preload-820576" is "Ready"
	I1124 09:06:49.560384  710410 pod_ready.go:86] duration metric: took 3.657273ms for pod "etcd-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.562524  710410 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.566017  710410 pod_ready.go:94] pod "kube-apiserver-no-preload-820576" is "Ready"
	I1124 09:06:49.566036  710410 pod_ready.go:86] duration metric: took 3.49074ms for pod "kube-apiserver-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.567748  710410 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.752582  710410 pod_ready.go:94] pod "kube-controller-manager-no-preload-820576" is "Ready"
	I1124 09:06:49.752618  710410 pod_ready.go:86] duration metric: took 184.846641ms for pod "kube-controller-manager-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.952635  710410 pod_ready.go:83] waiting for pod "kube-proxy-vz24l" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:50.353864  710410 pod_ready.go:94] pod "kube-proxy-vz24l" is "Ready"
	I1124 09:06:50.353965  710410 pod_ready.go:86] duration metric: took 401.30197ms for pod "kube-proxy-vz24l" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:50.551947  710410 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:50.953035  710410 pod_ready.go:94] pod "kube-scheduler-no-preload-820576" is "Ready"
	I1124 09:06:50.953063  710410 pod_ready.go:86] duration metric: took 401.089529ms for pod "kube-scheduler-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:50.953079  710410 pod_ready.go:40] duration metric: took 34.907713729s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:06:51.000066  710410 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1124 09:06:51.001724  710410 out.go:179] * Done! kubectl is now configured to use "no-preload-820576" cluster and "default" namespace by default
	W1124 09:06:46.136663  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:48.634477  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:49.253434  712609 node_ready.go:57] node "embed-certs-841285" has "Ready":"False" status (will retry)
	I1124 09:06:51.253119  712609 node_ready.go:49] node "embed-certs-841285" is "Ready"
	I1124 09:06:51.253147  712609 node_ready.go:38] duration metric: took 11.003373653s for node "embed-certs-841285" to be "Ready" ...
	I1124 09:06:51.253162  712609 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:06:51.253205  712609 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:06:51.267104  712609 api_server.go:72] duration metric: took 11.292674054s to wait for apiserver process to appear ...
	I1124 09:06:51.267131  712609 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:06:51.267149  712609 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 09:06:51.271589  712609 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 09:06:51.272757  712609 api_server.go:141] control plane version: v1.34.2
	I1124 09:06:51.272785  712609 api_server.go:131] duration metric: took 5.647123ms to wait for apiserver health ...
	I1124 09:06:51.272795  712609 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:06:51.276409  712609 system_pods.go:59] 8 kube-system pods found
	I1124 09:06:51.276447  712609 system_pods.go:61] "coredns-66bc5c9577-pj9dj" [aeb3ca53-e377-4bb6-ac0b-0d30d279be3f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:51.276470  712609 system_pods.go:61] "etcd-embed-certs-841285" [5f3336ea-e36d-4b8f-a6de-c1e595b2961e] Running
	I1124 09:06:51.276479  712609 system_pods.go:61] "kindnet-vx768" [1815dcaa-34e5-492f-9cc5-89725e8bdd87] Running
	I1124 09:06:51.276491  712609 system_pods.go:61] "kube-apiserver-embed-certs-841285" [b0ac5705-f9a9-4fea-8af8-c5d77c7f74ed] Running
	I1124 09:06:51.276501  712609 system_pods.go:61] "kube-controller-manager-embed-certs-841285" [fc1170ed-2663-4ce9-8828-d57be6b82592] Running
	I1124 09:06:51.276506  712609 system_pods.go:61] "kube-proxy-fnp4m" [27a9ad80-225d-4155-82db-5c9e2b99d56c] Running
	I1124 09:06:51.276519  712609 system_pods.go:61] "kube-scheduler-embed-certs-841285" [92d4a46c-4456-426c-a51f-59702108ba5f] Running
	I1124 09:06:51.276557  712609 system_pods.go:61] "storage-provisioner" [a842c350-8d9a-4e1c-a3d6-286e8dd975f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:51.276569  712609 system_pods.go:74] duration metric: took 3.768489ms to wait for pod list to return data ...
	I1124 09:06:51.276577  712609 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:06:51.279038  712609 default_sa.go:45] found service account: "default"
	I1124 09:06:51.279060  712609 default_sa.go:55] duration metric: took 2.474985ms for default service account to be created ...
	I1124 09:06:51.279068  712609 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:06:51.282183  712609 system_pods.go:86] 8 kube-system pods found
	I1124 09:06:51.282218  712609 system_pods.go:89] "coredns-66bc5c9577-pj9dj" [aeb3ca53-e377-4bb6-ac0b-0d30d279be3f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:51.282227  712609 system_pods.go:89] "etcd-embed-certs-841285" [5f3336ea-e36d-4b8f-a6de-c1e595b2961e] Running
	I1124 09:06:51.282235  712609 system_pods.go:89] "kindnet-vx768" [1815dcaa-34e5-492f-9cc5-89725e8bdd87] Running
	I1124 09:06:51.282241  712609 system_pods.go:89] "kube-apiserver-embed-certs-841285" [b0ac5705-f9a9-4fea-8af8-c5d77c7f74ed] Running
	I1124 09:06:51.282247  712609 system_pods.go:89] "kube-controller-manager-embed-certs-841285" [fc1170ed-2663-4ce9-8828-d57be6b82592] Running
	I1124 09:06:51.282251  712609 system_pods.go:89] "kube-proxy-fnp4m" [27a9ad80-225d-4155-82db-5c9e2b99d56c] Running
	I1124 09:06:51.282257  712609 system_pods.go:89] "kube-scheduler-embed-certs-841285" [92d4a46c-4456-426c-a51f-59702108ba5f] Running
	I1124 09:06:51.282264  712609 system_pods.go:89] "storage-provisioner" [a842c350-8d9a-4e1c-a3d6-286e8dd975f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:51.282297  712609 retry.go:31] will retry after 197.083401ms: missing components: kube-dns
	I1124 09:06:51.482726  712609 system_pods.go:86] 8 kube-system pods found
	I1124 09:06:51.482756  712609 system_pods.go:89] "coredns-66bc5c9577-pj9dj" [aeb3ca53-e377-4bb6-ac0b-0d30d279be3f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:51.482761  712609 system_pods.go:89] "etcd-embed-certs-841285" [5f3336ea-e36d-4b8f-a6de-c1e595b2961e] Running
	I1124 09:06:51.482767  712609 system_pods.go:89] "kindnet-vx768" [1815dcaa-34e5-492f-9cc5-89725e8bdd87] Running
	I1124 09:06:51.482771  712609 system_pods.go:89] "kube-apiserver-embed-certs-841285" [b0ac5705-f9a9-4fea-8af8-c5d77c7f74ed] Running
	I1124 09:06:51.482775  712609 system_pods.go:89] "kube-controller-manager-embed-certs-841285" [fc1170ed-2663-4ce9-8828-d57be6b82592] Running
	I1124 09:06:51.482778  712609 system_pods.go:89] "kube-proxy-fnp4m" [27a9ad80-225d-4155-82db-5c9e2b99d56c] Running
	I1124 09:06:51.482782  712609 system_pods.go:89] "kube-scheduler-embed-certs-841285" [92d4a46c-4456-426c-a51f-59702108ba5f] Running
	I1124 09:06:51.482786  712609 system_pods.go:89] "storage-provisioner" [a842c350-8d9a-4e1c-a3d6-286e8dd975f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:51.482801  712609 retry.go:31] will retry after 362.97691ms: missing components: kube-dns
	I1124 09:06:51.850095  712609 system_pods.go:86] 8 kube-system pods found
	I1124 09:06:51.850126  712609 system_pods.go:89] "coredns-66bc5c9577-pj9dj" [aeb3ca53-e377-4bb6-ac0b-0d30d279be3f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:51.850132  712609 system_pods.go:89] "etcd-embed-certs-841285" [5f3336ea-e36d-4b8f-a6de-c1e595b2961e] Running
	I1124 09:06:51.850138  712609 system_pods.go:89] "kindnet-vx768" [1815dcaa-34e5-492f-9cc5-89725e8bdd87] Running
	I1124 09:06:51.850142  712609 system_pods.go:89] "kube-apiserver-embed-certs-841285" [b0ac5705-f9a9-4fea-8af8-c5d77c7f74ed] Running
	I1124 09:06:51.850148  712609 system_pods.go:89] "kube-controller-manager-embed-certs-841285" [fc1170ed-2663-4ce9-8828-d57be6b82592] Running
	I1124 09:06:51.850151  712609 system_pods.go:89] "kube-proxy-fnp4m" [27a9ad80-225d-4155-82db-5c9e2b99d56c] Running
	I1124 09:06:51.850156  712609 system_pods.go:89] "kube-scheduler-embed-certs-841285" [92d4a46c-4456-426c-a51f-59702108ba5f] Running
	I1124 09:06:51.850170  712609 system_pods.go:89] "storage-provisioner" [a842c350-8d9a-4e1c-a3d6-286e8dd975f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:51.850192  712609 retry.go:31] will retry after 480.664538ms: missing components: kube-dns
	I1124 09:06:52.335518  712609 system_pods.go:86] 8 kube-system pods found
	I1124 09:06:52.335548  712609 system_pods.go:89] "coredns-66bc5c9577-pj9dj" [aeb3ca53-e377-4bb6-ac0b-0d30d279be3f] Running
	I1124 09:06:52.335557  712609 system_pods.go:89] "etcd-embed-certs-841285" [5f3336ea-e36d-4b8f-a6de-c1e595b2961e] Running
	I1124 09:06:52.335562  712609 system_pods.go:89] "kindnet-vx768" [1815dcaa-34e5-492f-9cc5-89725e8bdd87] Running
	I1124 09:06:52.335567  712609 system_pods.go:89] "kube-apiserver-embed-certs-841285" [b0ac5705-f9a9-4fea-8af8-c5d77c7f74ed] Running
	I1124 09:06:52.335573  712609 system_pods.go:89] "kube-controller-manager-embed-certs-841285" [fc1170ed-2663-4ce9-8828-d57be6b82592] Running
	I1124 09:06:52.335578  712609 system_pods.go:89] "kube-proxy-fnp4m" [27a9ad80-225d-4155-82db-5c9e2b99d56c] Running
	I1124 09:06:52.335584  712609 system_pods.go:89] "kube-scheduler-embed-certs-841285" [92d4a46c-4456-426c-a51f-59702108ba5f] Running
	I1124 09:06:52.335588  712609 system_pods.go:89] "storage-provisioner" [a842c350-8d9a-4e1c-a3d6-286e8dd975f8] Running
	I1124 09:06:52.335599  712609 system_pods.go:126] duration metric: took 1.056524192s to wait for k8s-apps to be running ...
	I1124 09:06:52.335610  712609 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:06:52.335668  712609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:06:52.348782  712609 system_svc.go:56] duration metric: took 13.164048ms WaitForService to wait for kubelet
	I1124 09:06:52.348806  712609 kubeadm.go:587] duration metric: took 12.374379771s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:06:52.348823  712609 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:06:52.351516  712609 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:06:52.351546  712609 node_conditions.go:123] node cpu capacity is 8
	I1124 09:06:52.351563  712609 node_conditions.go:105] duration metric: took 2.735404ms to run NodePressure ...
	I1124 09:06:52.351581  712609 start.go:242] waiting for startup goroutines ...
	I1124 09:06:52.351595  712609 start.go:247] waiting for cluster config update ...
	I1124 09:06:52.351612  712609 start.go:256] writing updated cluster config ...
	I1124 09:06:52.351933  712609 ssh_runner.go:195] Run: rm -f paused
	I1124 09:06:52.355685  712609 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:06:52.359005  712609 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pj9dj" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.362797  712609 pod_ready.go:94] pod "coredns-66bc5c9577-pj9dj" is "Ready"
	I1124 09:06:52.362820  712609 pod_ready.go:86] duration metric: took 3.79319ms for pod "coredns-66bc5c9577-pj9dj" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.364555  712609 pod_ready.go:83] waiting for pod "etcd-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.367975  712609 pod_ready.go:94] pod "etcd-embed-certs-841285" is "Ready"
	I1124 09:06:52.367994  712609 pod_ready.go:86] duration metric: took 3.418324ms for pod "etcd-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.369845  712609 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.373364  712609 pod_ready.go:94] pod "kube-apiserver-embed-certs-841285" is "Ready"
	I1124 09:06:52.373385  712609 pod_ready.go:86] duration metric: took 3.516894ms for pod "kube-apiserver-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.375033  712609 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.134280  709503 pod_ready.go:94] pod "coredns-5dd5756b68-vxxnm" is "Ready"
	I1124 09:06:51.134309  709503 pod_ready.go:86] duration metric: took 37.505689734s for pod "coredns-5dd5756b68-vxxnm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.137872  709503 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.143048  709503 pod_ready.go:94] pod "etcd-old-k8s-version-128377" is "Ready"
	I1124 09:06:51.143074  709503 pod_ready.go:86] duration metric: took 5.175259ms for pod "etcd-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.146283  709503 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.151227  709503 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-128377" is "Ready"
	I1124 09:06:51.151255  709503 pod_ready.go:86] duration metric: took 4.946885ms for pod "kube-apiserver-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.154486  709503 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.333825  709503 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-128377" is "Ready"
	I1124 09:06:51.333851  709503 pod_ready.go:86] duration metric: took 179.341709ms for pod "kube-controller-manager-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.535398  709503 pod_ready.go:83] waiting for pod "kube-proxy-fpbs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.933689  709503 pod_ready.go:94] pod "kube-proxy-fpbs2" is "Ready"
	I1124 09:06:51.933722  709503 pod_ready.go:86] duration metric: took 398.293307ms for pod "kube-proxy-fpbs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.133891  709503 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.533145  709503 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-128377" is "Ready"
	I1124 09:06:52.533173  709503 pod_ready.go:86] duration metric: took 399.255408ms for pod "kube-scheduler-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.533185  709503 pod_ready.go:40] duration metric: took 38.910563367s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:06:52.577376  709503 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 09:06:52.578870  709503 out.go:203] 
	W1124 09:06:52.579914  709503 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 09:06:52.580924  709503 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 09:06:52.581923  709503 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-128377" cluster and "default" namespace by default
	I1124 09:06:52.759728  712609 pod_ready.go:94] pod "kube-controller-manager-embed-certs-841285" is "Ready"
	I1124 09:06:52.759755  712609 pod_ready.go:86] duration metric: took 384.703934ms for pod "kube-controller-manager-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.959669  712609 pod_ready.go:83] waiting for pod "kube-proxy-fnp4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:53.359988  712609 pod_ready.go:94] pod "kube-proxy-fnp4m" is "Ready"
	I1124 09:06:53.360015  712609 pod_ready.go:86] duration metric: took 400.321858ms for pod "kube-proxy-fnp4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:53.560301  712609 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:53.959937  712609 pod_ready.go:94] pod "kube-scheduler-embed-certs-841285" is "Ready"
	I1124 09:06:53.959964  712609 pod_ready.go:86] duration metric: took 399.640947ms for pod "kube-scheduler-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:53.959975  712609 pod_ready.go:40] duration metric: took 1.604258428s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:06:54.004555  712609 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:06:54.006291  712609 out.go:179] * Done! kubectl is now configured to use "embed-certs-841285" cluster and "default" namespace by default
	I1124 09:06:51.101685  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:51.102112  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:51.102174  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:51.102227  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:51.135040  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:51.135065  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:51.135071  685562 cri.go:89] found id: ""
	I1124 09:06:51.135081  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:51.135148  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.140404  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.144856  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:51.144940  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:51.180635  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:51.180660  685562 cri.go:89] found id: ""
	I1124 09:06:51.180673  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:51.180732  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.187022  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:51.187093  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:51.215838  685562 cri.go:89] found id: ""
	I1124 09:06:51.215863  685562 logs.go:282] 0 containers: []
	W1124 09:06:51.215871  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:51.215877  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:51.215933  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:51.244066  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:51.244094  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:51.244100  685562 cri.go:89] found id: ""
	I1124 09:06:51.244109  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:51.244178  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.248240  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.252274  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:51.252342  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:51.285805  685562 cri.go:89] found id: ""
	I1124 09:06:51.285828  685562 logs.go:282] 0 containers: []
	W1124 09:06:51.285838  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:51.285847  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:51.285906  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:51.323489  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:51.323527  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:51.323533  685562 cri.go:89] found id: ""
	I1124 09:06:51.323543  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:51.323604  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.328663  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.333540  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:51.333610  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:51.362894  685562 cri.go:89] found id: ""
	I1124 09:06:51.362922  685562 logs.go:282] 0 containers: []
	W1124 09:06:51.362932  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:51.362941  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:51.363008  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:51.394531  685562 cri.go:89] found id: ""
	I1124 09:06:51.394556  685562 logs.go:282] 0 containers: []
	W1124 09:06:51.394566  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:51.394580  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:51.394599  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:51.475738  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:51.475775  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:51.491643  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:51.491678  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:51.532760  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:51.532799  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:51.569840  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:51.569885  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:51.614611  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:51.614657  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:51.649935  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:51.649970  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:51.697040  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:51.697082  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:51.758985  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:51.759012  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:51.759029  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:51.791554  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:51.791583  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:51.826807  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:51.826843  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:51.870472  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:51.870507  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:54.404826  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:54.405255  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:54.405323  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:54.405386  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:54.433970  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:54.433998  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:54.434003  685562 cri.go:89] found id: ""
	I1124 09:06:54.434012  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:54.434075  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.438414  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.442166  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:54.442238  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:54.468667  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:54.468694  685562 cri.go:89] found id: ""
	I1124 09:06:54.468706  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:54.468766  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.472777  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:54.472838  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:54.498949  685562 cri.go:89] found id: ""
	I1124 09:06:54.498975  685562 logs.go:282] 0 containers: []
	W1124 09:06:54.498985  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:54.498993  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:54.499054  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:54.529848  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:54.529868  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:54.529871  685562 cri.go:89] found id: ""
	I1124 09:06:54.529879  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:54.529940  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.534397  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.538638  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:54.538709  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:54.567281  685562 cri.go:89] found id: ""
	I1124 09:06:54.567310  685562 logs.go:282] 0 containers: []
	W1124 09:06:54.567322  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:54.567332  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:54.567386  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:54.596806  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:54.596836  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:54.596843  685562 cri.go:89] found id: ""
	I1124 09:06:54.596853  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:54.596914  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.601444  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.605871  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:54.605941  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:54.633262  685562 cri.go:89] found id: ""
	I1124 09:06:54.633287  685562 logs.go:282] 0 containers: []
	W1124 09:06:54.633295  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:54.633301  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:54.633350  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:54.660983  685562 cri.go:89] found id: ""
	I1124 09:06:54.661010  685562 logs.go:282] 0 containers: []
	W1124 09:06:54.661020  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:54.661034  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:54.661060  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:54.695211  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:54.695242  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:54.738087  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:54.738118  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:54.768628  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:54.768660  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:54.851230  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:54.851260  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:54.882690  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:54.882718  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:54.915991  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:54.916021  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:54.943256  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:54.943281  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:54.969234  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:54.969270  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:55.001750  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:55.001784  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:55.015657  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:55.015687  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:55.072493  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:55.072512  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:55.072531  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:57.607270  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:57.607779  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:57.607836  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:57.607903  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:57.638496  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:57.638521  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:57.638525  685562 cri.go:89] found id: ""
	I1124 09:06:57.638533  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:57.638588  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.642977  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.646554  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:57.646625  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:57.676323  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:57.676353  685562 cri.go:89] found id: ""
	I1124 09:06:57.676364  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:57.676426  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.680991  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:57.681061  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:57.707542  685562 cri.go:89] found id: ""
	I1124 09:06:57.707573  685562 logs.go:282] 0 containers: []
	W1124 09:06:57.707584  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:57.707592  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:57.707650  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:57.737756  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:57.737782  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:57.737788  685562 cri.go:89] found id: ""
	I1124 09:06:57.737798  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:57.737860  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.742071  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.745921  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:57.745994  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:57.775084  685562 cri.go:89] found id: ""
	I1124 09:06:57.775108  685562 logs.go:282] 0 containers: []
	W1124 09:06:57.775119  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:57.775128  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:57.775200  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:57.803547  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:57.803575  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:57.803580  685562 cri.go:89] found id: ""
	I1124 09:06:57.803592  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:57.803656  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.808035  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.811815  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:57.811877  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:57.838909  685562 cri.go:89] found id: ""
	I1124 09:06:57.838941  685562 logs.go:282] 0 containers: []
	W1124 09:06:57.838953  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:57.838961  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:57.839023  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:57.867727  685562 cri.go:89] found id: ""
	I1124 09:06:57.867752  685562 logs.go:282] 0 containers: []
	W1124 09:06:57.867765  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:57.867778  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:57.867794  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:57.902109  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:57.902140  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:57.954496  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:57.954531  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:58.040359  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:58.040394  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:58.103496  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:58.103527  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:58.103541  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:58.135471  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:58.135503  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:58.165443  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:58.165510  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:58.196093  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:58.196119  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:58.227441  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:58.227488  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:58.241918  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:58.241949  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:58.275785  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:58.275819  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:58.308006  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:58.308038  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	2794c60f1b87d       56cc512116c8f       7 seconds ago       Running             busybox                   0                   cd6e9dd958e1b       busybox                                      default
	5791bcd31b139       52546a367cc9e       12 seconds ago      Running             coredns                   0                   cea257d400b5b       coredns-66bc5c9577-pj9dj                     kube-system
	bb014e8f46371       6e38f40d628db       12 seconds ago      Running             storage-provisioner       0                   b387d8741a385       storage-provisioner                          kube-system
	70e7d5014d73f       409467f978b4a       23 seconds ago      Running             kindnet-cni               0                   7fb43b0ba3148       kindnet-vx768                                kube-system
	aceceb2c284ef       8aa150647e88a       24 seconds ago      Running             kube-proxy                0                   6555090d7ce71       kube-proxy-fnp4m                             kube-system
	d97d24cf8d340       88320b5498ff2       34 seconds ago      Running             kube-scheduler            0                   cba101b3a6b17       kube-scheduler-embed-certs-841285            kube-system
	2ce09b161b5c2       01e8bacf0f500       34 seconds ago      Running             kube-controller-manager   0                   7ea2f34b1722b       kube-controller-manager-embed-certs-841285   kube-system
	f898005685984       a5f569d49a979       34 seconds ago      Running             kube-apiserver            0                   66c80159a2c1b       kube-apiserver-embed-certs-841285            kube-system
	6d95f1561bf17       a3e246e9556e9       34 seconds ago      Running             etcd                      0                   c492c3650c4f1       etcd-embed-certs-841285                      kube-system
	
	
	==> containerd <==
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.601195355Z" level=info msg="CreateContainer within sandbox \"b387d8741a385e01b5c7a73e98f42bf5db21a510fac5123e093fe5421dec8fad\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"bb014e8f4637159e636d0a426c87a05841ddeac54ecd7c79319307dddaca5a7e\""
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.601846385Z" level=info msg="StartContainer for \"bb014e8f4637159e636d0a426c87a05841ddeac54ecd7c79319307dddaca5a7e\""
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.602954294Z" level=info msg="connecting to shim bb014e8f4637159e636d0a426c87a05841ddeac54ecd7c79319307dddaca5a7e" address="unix:///run/containerd/s/27bf57dc6ceb1e46fc50df6038dd3da7382d463a39b8580b6eb4b11174d68acb" protocol=ttrpc version=3
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.605099421Z" level=info msg="Container 5791bcd31b139a067d22096d1c802834a688cce871829ade2568ef2c21c27c29: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.610766750Z" level=info msg="CreateContainer within sandbox \"cea257d400b5bb22db6a66b2ebfbc367de9158d5780269a913335780361d1c8c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5791bcd31b139a067d22096d1c802834a688cce871829ade2568ef2c21c27c29\""
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.611340066Z" level=info msg="StartContainer for \"5791bcd31b139a067d22096d1c802834a688cce871829ade2568ef2c21c27c29\""
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.612365214Z" level=info msg="connecting to shim 5791bcd31b139a067d22096d1c802834a688cce871829ade2568ef2c21c27c29" address="unix:///run/containerd/s/a45e379d451fef72676ccc0f1406be396cadcd8bf5f03b5dc3c8b6207502e546" protocol=ttrpc version=3
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.661054178Z" level=info msg="StartContainer for \"5791bcd31b139a067d22096d1c802834a688cce871829ade2568ef2c21c27c29\" returns successfully"
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.661111729Z" level=info msg="StartContainer for \"bb014e8f4637159e636d0a426c87a05841ddeac54ecd7c79319307dddaca5a7e\" returns successfully"
	Nov 24 09:06:54 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:54.497340066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b0e3c418-2bd8-4d22-8f34-07ae172f4007,Namespace:default,Attempt:0,}"
	Nov 24 09:06:54 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:54.525772270Z" level=info msg="connecting to shim cd6e9dd958e1b877fa364c95cd9afc0cd535d0bca4b2783f855f90f353695930" address="unix:///run/containerd/s/2147e4cab68b4dde9e2aa772b84a3fd7aabb7c0044d0ee461b0ddf18a05ff541" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 09:06:54 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:54.598675142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b0e3c418-2bd8-4d22-8f34-07ae172f4007,Namespace:default,Attempt:0,} returns sandbox id \"cd6e9dd958e1b877fa364c95cd9afc0cd535d0bca4b2783f855f90f353695930\""
	Nov 24 09:06:54 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:54.600749884Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.866538093Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.867018469Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396648"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.868020689Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.869926297Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.870347246Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.269554675s"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.870396814Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.874354233Z" level=info msg="CreateContainer within sandbox \"cd6e9dd958e1b877fa364c95cd9afc0cd535d0bca4b2783f855f90f353695930\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.880517848Z" level=info msg="Container 2794c60f1b87dd413e19014dcba2972de5f1a47c7fca91d3886c78dac452b073: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.885938620Z" level=info msg="CreateContainer within sandbox \"cd6e9dd958e1b877fa364c95cd9afc0cd535d0bca4b2783f855f90f353695930\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"2794c60f1b87dd413e19014dcba2972de5f1a47c7fca91d3886c78dac452b073\""
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.886543743Z" level=info msg="StartContainer for \"2794c60f1b87dd413e19014dcba2972de5f1a47c7fca91d3886c78dac452b073\""
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.888055152Z" level=info msg="connecting to shim 2794c60f1b87dd413e19014dcba2972de5f1a47c7fca91d3886c78dac452b073" address="unix:///run/containerd/s/2147e4cab68b4dde9e2aa772b84a3fd7aabb7c0044d0ee461b0ddf18a05ff541" protocol=ttrpc version=3
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.956158525Z" level=info msg="StartContainer for \"2794c60f1b87dd413e19014dcba2972de5f1a47c7fca91d3886c78dac452b073\" returns successfully"
	
	
	==> coredns [5791bcd31b139a067d22096d1c802834a688cce871829ade2568ef2c21c27c29] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42241 - 35548 "HINFO IN 8163729340161881770.3044721224429617214. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033273972s
	
	
	==> describe nodes <==
	Name:               embed-certs-841285
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-841285
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=embed-certs-841285
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_06_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:06:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-841285
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:06:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:07:04 +0000   Mon, 24 Nov 2025 09:06:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:07:04 +0000   Mon, 24 Nov 2025 09:06:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:07:04 +0000   Mon, 24 Nov 2025 09:06:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:07:04 +0000   Mon, 24 Nov 2025 09:06:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-841285
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ebc07106-33bb-498a-bebe-7072c74c7486
	  Boot ID:                    f052cd47-57de-4521-b5fb-139979fdced9
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-pj9dj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-841285                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-vx768                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-841285             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-841285    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-fnp4m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-841285             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node embed-certs-841285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node embed-certs-841285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x7 over 35s)  kubelet          Node embed-certs-841285 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node embed-certs-841285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node embed-certs-841285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node embed-certs-841285 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node embed-certs-841285 event: Registered Node embed-certs-841285 in Controller
	  Normal  NodeReady                13s                kubelet          Node embed-certs-841285 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [6d95f1561bf17dce61ba80d159dea00411b59b2a76b869e85c4db0b747e6e052] <==
	{"level":"warn","ts":"2025-11-24T09:06:31.039076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.046575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.058493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.063120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.071121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.079615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.086948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.093637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.099924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.106225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.119544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.132929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.146561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.153181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.159133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.168097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.176030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.182287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.188508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.194669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.200971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.208713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.214933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.231700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.237642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56952","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:07:04 up  3:49,  0 user,  load average: 3.73, 3.61, 10.25
	Linux embed-certs-841285 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [70e7d5014d73fb61f0d19dd479c539b45ebfacffc4d3a9a9e0dbc8e25a4ff258] <==
	I1124 09:06:40.842614       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:06:40.842888       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 09:06:40.843044       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:06:40.843068       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:06:40.843102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:06:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:06:41.044928       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:06:41.044994       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:06:41.045304       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:06:41.045371       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:06:41.442524       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:06:41.442574       1 metrics.go:72] Registering metrics
	I1124 09:06:41.442686       1 controller.go:711] "Syncing nftables rules"
	I1124 09:06:51.047556       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:06:51.047636       1 main.go:301] handling current node
	I1124 09:07:01.046751       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:07:01.046784       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f898005685984dc4556869a93c75316cdf14d3c6467c0e990707fdb33212bf16] <==
	I1124 09:06:31.741773       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:06:31.744875       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 09:06:31.746219       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 09:06:31.746263       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:06:31.754918       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:06:31.755769       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:06:31.932098       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:06:32.644786       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:06:32.648495       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:06:32.648514       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:06:33.062909       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:06:33.096718       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:06:33.147952       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:06:33.153388       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 09:06:33.154337       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:06:33.158558       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:06:33.669791       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:06:34.006361       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:06:34.016072       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:06:34.023841       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:06:38.672578       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:06:39.621392       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 09:06:39.722996       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:06:39.726414       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 09:07:03.288166       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:58428: use of closed network connection
	
	
	==> kube-controller-manager [2ce09b161b5c24b322e72a291e6d0c4e6fff790b91ca66e60518ed811ec018de] <==
	I1124 09:06:38.649143       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-841285" podCIDRs=["10.244.0.0/24"]
	I1124 09:06:38.669149       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 09:06:38.669170       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 09:06:38.669190       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 09:06:38.669257       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 09:06:38.669276       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 09:06:38.669294       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:06:38.669314       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 09:06:38.669335       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 09:06:38.669261       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 09:06:38.669369       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 09:06:38.669368       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 09:06:38.669441       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-841285"
	I1124 09:06:38.669532       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 09:06:38.669591       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 09:06:38.669672       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 09:06:38.669955       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:06:38.670056       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 09:06:38.670092       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:06:38.670155       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 09:06:38.670372       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 09:06:38.672210       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 09:06:38.676913       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:06:38.698131       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:06:53.689391       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [aceceb2c284ef07de874eba9caa9408bb0f88b56e8227e343a08ec26fb375bf7] <==
	I1124 09:06:40.335788       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:06:40.406183       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:06:40.507249       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:06:40.507284       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 09:06:40.507401       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:06:40.532334       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:06:40.532404       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:06:40.538247       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:06:40.538649       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:06:40.538677       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:06:40.540090       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:06:40.540110       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:06:40.540226       1 config.go:200] "Starting service config controller"
	I1124 09:06:40.540298       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:06:40.540391       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:06:40.540271       1 config.go:309] "Starting node config controller"
	I1124 09:06:40.540996       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:06:40.541006       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:06:40.540317       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:06:40.640235       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:06:40.641447       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:06:40.641453       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d97d24cf8d340628a9581ff5edc0ea87945c6edffba8606d442b1e4884d4e7f2] <==
	E1124 09:06:31.695663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 09:06:31.695702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:06:31.695841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:06:31.695867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:06:31.695924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:06:31.695948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:06:31.696010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:06:31.696091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 09:06:31.696098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:06:31.695993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:06:31.696215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 09:06:31.696495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:06:31.696524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:06:31.696743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:06:32.543383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:06:32.564651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:06:32.631555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:06:32.669643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:06:32.741498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:06:32.780408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:06:32.798896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:06:32.878930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:06:32.915329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:06:33.029945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 09:06:35.492553       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:06:34 embed-certs-841285 kubelet[1454]: E1124 09:06:34.867929    1454 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-841285\" already exists" pod="kube-system/kube-scheduler-embed-certs-841285"
	Nov 24 09:06:34 embed-certs-841285 kubelet[1454]: I1124 09:06:34.881371    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-841285" podStartSLOduration=1.8813263839999999 podStartE2EDuration="1.881326384s" podCreationTimestamp="2025-11-24 09:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:34.881122775 +0000 UTC m=+1.116853867" watchObservedRunningTime="2025-11-24 09:06:34.881326384 +0000 UTC m=+1.117057470"
	Nov 24 09:06:34 embed-certs-841285 kubelet[1454]: I1124 09:06:34.899370    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-841285" podStartSLOduration=1.899347068 podStartE2EDuration="1.899347068s" podCreationTimestamp="2025-11-24 09:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:34.889972597 +0000 UTC m=+1.125703687" watchObservedRunningTime="2025-11-24 09:06:34.899347068 +0000 UTC m=+1.135078156"
	Nov 24 09:06:34 embed-certs-841285 kubelet[1454]: I1124 09:06:34.906717    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-841285" podStartSLOduration=1.906697591 podStartE2EDuration="1.906697591s" podCreationTimestamp="2025-11-24 09:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:34.899526679 +0000 UTC m=+1.135257767" watchObservedRunningTime="2025-11-24 09:06:34.906697591 +0000 UTC m=+1.142428662"
	Nov 24 09:06:34 embed-certs-841285 kubelet[1454]: I1124 09:06:34.906882    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-841285" podStartSLOduration=1.906872854 podStartE2EDuration="1.906872854s" podCreationTimestamp="2025-11-24 09:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:34.906868153 +0000 UTC m=+1.142599233" watchObservedRunningTime="2025-11-24 09:06:34.906872854 +0000 UTC m=+1.142603943"
	Nov 24 09:06:38 embed-certs-841285 kubelet[1454]: I1124 09:06:38.689859    1454 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 09:06:38 embed-certs-841285 kubelet[1454]: I1124 09:06:38.690619    1454 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670542    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27a9ad80-225d-4155-82db-5c9e2b99d56c-xtables-lock\") pod \"kube-proxy-fnp4m\" (UID: \"27a9ad80-225d-4155-82db-5c9e2b99d56c\") " pod="kube-system/kube-proxy-fnp4m"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670590    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27a9ad80-225d-4155-82db-5c9e2b99d56c-lib-modules\") pod \"kube-proxy-fnp4m\" (UID: \"27a9ad80-225d-4155-82db-5c9e2b99d56c\") " pod="kube-system/kube-proxy-fnp4m"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670617    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v489d\" (UniqueName: \"kubernetes.io/projected/27a9ad80-225d-4155-82db-5c9e2b99d56c-kube-api-access-v489d\") pod \"kube-proxy-fnp4m\" (UID: \"27a9ad80-225d-4155-82db-5c9e2b99d56c\") " pod="kube-system/kube-proxy-fnp4m"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670658    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1815dcaa-34e5-492f-9cc5-89725e8bdd87-cni-cfg\") pod \"kindnet-vx768\" (UID: \"1815dcaa-34e5-492f-9cc5-89725e8bdd87\") " pod="kube-system/kindnet-vx768"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670690    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1815dcaa-34e5-492f-9cc5-89725e8bdd87-xtables-lock\") pod \"kindnet-vx768\" (UID: \"1815dcaa-34e5-492f-9cc5-89725e8bdd87\") " pod="kube-system/kindnet-vx768"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670713    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1815dcaa-34e5-492f-9cc5-89725e8bdd87-lib-modules\") pod \"kindnet-vx768\" (UID: \"1815dcaa-34e5-492f-9cc5-89725e8bdd87\") " pod="kube-system/kindnet-vx768"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670736    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ht4h\" (UniqueName: \"kubernetes.io/projected/1815dcaa-34e5-492f-9cc5-89725e8bdd87-kube-api-access-2ht4h\") pod \"kindnet-vx768\" (UID: \"1815dcaa-34e5-492f-9cc5-89725e8bdd87\") " pod="kube-system/kindnet-vx768"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670792    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27a9ad80-225d-4155-82db-5c9e2b99d56c-kube-proxy\") pod \"kube-proxy-fnp4m\" (UID: \"27a9ad80-225d-4155-82db-5c9e2b99d56c\") " pod="kube-system/kube-proxy-fnp4m"
	Nov 24 09:06:40 embed-certs-841285 kubelet[1454]: I1124 09:06:40.895017    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vx768" podStartSLOduration=1.894996549 podStartE2EDuration="1.894996549s" podCreationTimestamp="2025-11-24 09:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:40.885621976 +0000 UTC m=+7.121353064" watchObservedRunningTime="2025-11-24 09:06:40.894996549 +0000 UTC m=+7.130727638"
	Nov 24 09:06:40 embed-certs-841285 kubelet[1454]: I1124 09:06:40.903392    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fnp4m" podStartSLOduration=1.9033737990000001 podStartE2EDuration="1.903373799s" podCreationTimestamp="2025-11-24 09:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:40.894962969 +0000 UTC m=+7.130694058" watchObservedRunningTime="2025-11-24 09:06:40.903373799 +0000 UTC m=+7.139104893"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.149563    1454 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.258701    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jqds\" (UniqueName: \"kubernetes.io/projected/a842c350-8d9a-4e1c-a3d6-286e8dd975f8-kube-api-access-2jqds\") pod \"storage-provisioner\" (UID: \"a842c350-8d9a-4e1c-a3d6-286e8dd975f8\") " pod="kube-system/storage-provisioner"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.258767    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a842c350-8d9a-4e1c-a3d6-286e8dd975f8-tmp\") pod \"storage-provisioner\" (UID: \"a842c350-8d9a-4e1c-a3d6-286e8dd975f8\") " pod="kube-system/storage-provisioner"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.258797    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aeb3ca53-e377-4bb6-ac0b-0d30d279be3f-config-volume\") pod \"coredns-66bc5c9577-pj9dj\" (UID: \"aeb3ca53-e377-4bb6-ac0b-0d30d279be3f\") " pod="kube-system/coredns-66bc5c9577-pj9dj"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.258819    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bthj\" (UniqueName: \"kubernetes.io/projected/aeb3ca53-e377-4bb6-ac0b-0d30d279be3f-kube-api-access-8bthj\") pod \"coredns-66bc5c9577-pj9dj\" (UID: \"aeb3ca53-e377-4bb6-ac0b-0d30d279be3f\") " pod="kube-system/coredns-66bc5c9577-pj9dj"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.912441    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pj9dj" podStartSLOduration=12.912418824 podStartE2EDuration="12.912418824s" podCreationTimestamp="2025-11-24 09:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:51.912282127 +0000 UTC m=+18.148013218" watchObservedRunningTime="2025-11-24 09:06:51.912418824 +0000 UTC m=+18.148149913"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.921130    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.921107228 podStartE2EDuration="11.921107228s" podCreationTimestamp="2025-11-24 09:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:51.921073137 +0000 UTC m=+18.156804227" watchObservedRunningTime="2025-11-24 09:06:51.921107228 +0000 UTC m=+18.156838320"
	Nov 24 09:06:54 embed-certs-841285 kubelet[1454]: I1124 09:06:54.276244    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgjpn\" (UniqueName: \"kubernetes.io/projected/b0e3c418-2bd8-4d22-8f34-07ae172f4007-kube-api-access-jgjpn\") pod \"busybox\" (UID: \"b0e3c418-2bd8-4d22-8f34-07ae172f4007\") " pod="default/busybox"
	
	
	==> storage-provisioner [bb014e8f4637159e636d0a426c87a05841ddeac54ecd7c79319307dddaca5a7e] <==
	I1124 09:06:51.672130       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:06:51.683385       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:06:51.683455       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:06:51.686218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:51.692635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:06:51.692810       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:06:51.693030       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b385703d-3f7e-47f3-bebb-4b78081f4b4c", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-841285_94699be9-2ddd-4f62-90d1-da0627f35948 became leader
	I1124 09:06:51.693655       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-841285_94699be9-2ddd-4f62-90d1-da0627f35948!
	W1124 09:06:51.695986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:51.701008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:06:51.794574       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-841285_94699be9-2ddd-4f62-90d1-da0627f35948!
	W1124 09:06:53.704271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:53.708195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:55.711233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:55.714861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:57.718183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:57.722689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:59.726145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:59.730018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:01.733153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:01.736844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:03.741288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:03.746015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-841285 -n embed-certs-841285
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-841285 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-841285
helpers_test.go:243: (dbg) docker inspect embed-certs-841285:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c",
	        "Created": "2025-11-24T09:06:13.101374533Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 715473,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:06:13.148755139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c/hostname",
	        "HostsPath": "/var/lib/docker/containers/2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c/hosts",
	        "LogPath": "/var/lib/docker/containers/2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c/2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c-json.log",
	        "Name": "/embed-certs-841285",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-841285:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-841285",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2818f8831adf3fc47817ecd70509455d5fae47d7720c60a5fc42aca66f6d9c5c",
	                "LowerDir": "/var/lib/docker/overlay2/4a6674e833905d19e86aef234376161d1823017660060b03112f8f644236912e-init/diff:/var/lib/docker/overlay2/a062700147ad5d1f8f2a68f70ba6ad34ea6495dd365bcb265ab17ea27961837b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a6674e833905d19e86aef234376161d1823017660060b03112f8f644236912e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a6674e833905d19e86aef234376161d1823017660060b03112f8f644236912e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a6674e833905d19e86aef234376161d1823017660060b03112f8f644236912e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-841285",
	                "Source": "/var/lib/docker/volumes/embed-certs-841285/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-841285",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-841285",
	                "name.minikube.sigs.k8s.io": "embed-certs-841285",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "67df66d701529e287730cb9bcd494fde3107ff602b70cf44fc90b796050f2eec",
	            "SandboxKey": "/var/run/docker/netns/67df66d70152",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-841285": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "878cc741640bbb1d91d845a9b685d01e89f4e862dc21c645f514f3029b1b1db2",
	                    "EndpointID": "125cc050625ae4fc4055cc1dd357d98e280c0e88627f2bd0be1b123cf15ef39d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e2:99:14:32:8f:dc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-841285",
	                        "2818f8831adf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-841285 -n embed-certs-841285
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-841285 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-841285 logs -n 25: (1.494563422s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-203355 sudo containerd config dump                                                                                                                                                                                                        │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p cilium-203355 sudo crio config                                                                                                                                                                                                                   │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ delete  │ -p cilium-203355                                                                                                                                                                                                                                    │ cilium-203355          │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:04 UTC │
	│ start   │ -p old-k8s-version-128377 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:05 UTC │
	│ start   │ -p no-preload-820576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │ 24 Nov 25 09:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-128377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:05 UTC │ 24 Nov 25 09:05 UTC │
	│ stop    │ -p old-k8s-version-128377 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:05 UTC │ 24 Nov 25 09:06 UTC │
	│ addons  │ enable metrics-server -p no-preload-820576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:05 UTC │ 24 Nov 25 09:05 UTC │
	│ stop    │ -p no-preload-820576 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:05 UTC │ 24 Nov 25 09:06 UTC │
	│ start   │ -p cert-expiration-869306 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-869306 │ jenkins │ v1.37.0 │ 24 Nov 25 09:05 UTC │ 24 Nov 25 09:06 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-128377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:06 UTC │ 24 Nov 25 09:06 UTC │
	│ start   │ -p old-k8s-version-128377 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:06 UTC │ 24 Nov 25 09:06 UTC │
	│ addons  │ enable dashboard -p no-preload-820576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:06 UTC │ 24 Nov 25 09:06 UTC │
	│ start   │ -p no-preload-820576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:06 UTC │ 24 Nov 25 09:06 UTC │
	│ delete  │ -p cert-expiration-869306                                                                                                                                                                                                                           │ cert-expiration-869306 │ jenkins │ v1.37.0 │ 24 Nov 25 09:06 UTC │ 24 Nov 25 09:06 UTC │
	│ start   │ -p embed-certs-841285 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                        │ embed-certs-841285     │ jenkins │ v1.37.0 │ 24 Nov 25 09:06 UTC │ 24 Nov 25 09:06 UTC │
	│ image   │ no-preload-820576 image list --format=json                                                                                                                                                                                                          │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ pause   │ -p no-preload-820576 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ image   │ old-k8s-version-128377 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ pause   │ -p old-k8s-version-128377 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ unpause │ -p no-preload-820576 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-820576      │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │                     │
	│ unpause │ -p old-k8s-version-128377 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-128377 │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:06:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:06:07.483540  712609 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:06:07.483759  712609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:06:07.483768  712609 out.go:374] Setting ErrFile to fd 2...
	I1124 09:06:07.483772  712609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:06:07.484052  712609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 09:06:07.484663  712609 out.go:368] Setting JSON to false
	I1124 09:06:07.486191  712609 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13703,"bootTime":1763961464,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:06:07.486274  712609 start.go:143] virtualization: kvm guest
	I1124 09:06:07.488217  712609 out.go:179] * [embed-certs-841285] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:06:07.489473  712609 notify.go:221] Checking for updates...
	I1124 09:06:07.489482  712609 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:06:07.490660  712609 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:06:07.492212  712609 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:06:07.497449  712609 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 09:06:07.498639  712609 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:06:07.499749  712609 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:06:07.501661  712609 config.go:182] Loaded profile config "kubernetes-upgrade-521313": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:06:07.501837  712609 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:06:07.501982  712609 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:06:07.502126  712609 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:06:07.531929  712609 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:06:07.532059  712609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:06:07.625894  712609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 09:06:07.609806264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:06:07.626075  712609 docker.go:319] overlay module found
	I1124 09:06:07.628280  712609 out.go:179] * Using the docker driver based on user configuration
	I1124 09:06:07.629359  712609 start.go:309] selected driver: docker
	I1124 09:06:07.629378  712609 start.go:927] validating driver "docker" against <nil>
	I1124 09:06:07.629399  712609 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:06:07.630257  712609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:06:07.714617  712609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 09:06:07.700319261 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:06:07.715055  712609 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 09:06:07.715492  712609 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:06:07.716933  712609 out.go:179] * Using Docker driver with root privileges
	I1124 09:06:07.718370  712609 cni.go:84] Creating CNI manager for ""
	I1124 09:06:07.718503  712609 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:06:07.718517  712609 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 09:06:07.718614  712609 start.go:353] cluster config:
	{Name:embed-certs-841285 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-841285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:06:07.720286  712609 out.go:179] * Starting "embed-certs-841285" primary control-plane node in "embed-certs-841285" cluster
	I1124 09:06:07.721255  712609 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 09:06:07.722693  712609 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:06:07.725075  712609 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1124 09:06:07.725141  712609 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1124 09:06:07.725154  712609 cache.go:65] Caching tarball of preloaded images
	I1124 09:06:07.725172  712609 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:06:07.725284  712609 preload.go:238] Found /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 09:06:07.725301  712609 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1124 09:06:07.725442  712609 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/config.json ...
	I1124 09:06:07.725514  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/config.json: {Name:mkf857cbddcb0b21a16751e4fa391cd5aacc43ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:07.755608  712609 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:06:07.755635  712609 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 09:06:07.755649  712609 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:06:07.755689  712609 start.go:360] acquireMachinesLock for embed-certs-841285: {Name:mkeaf1c7c2f33c7fd2227e10c2a6ab7b1478dfe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:07.755790  712609 start.go:364] duration metric: took 74.877µs to acquireMachinesLock for "embed-certs-841285"
	I1124 09:06:07.755822  712609 start.go:93] Provisioning new machine with config: &{Name:embed-certs-841285 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-841285 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:06:07.755914  712609 start.go:125] createHost starting for "" (driver="docker")
	I1124 09:06:03.017927  710410 out.go:252] * Restarting existing docker container for "no-preload-820576" ...
	I1124 09:06:03.018012  710410 cli_runner.go:164] Run: docker start no-preload-820576
	I1124 09:06:03.296314  710410 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:03.340739  710410 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:06:03.363219  710410 kic.go:430] container "no-preload-820576" state is running.
	I1124 09:06:03.363630  710410 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:06:03.382470  710410 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/config.json ...
	I1124 09:06:03.382718  710410 machine.go:94] provisionDockerMachine start ...
	I1124 09:06:03.382805  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:03.402573  710410 main.go:143] libmachine: Using SSH client type: native
	I1124 09:06:03.402831  710410 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 09:06:03.402846  710410 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:06:03.403650  710410 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59318->127.0.0.1:33078: read: connection reset by peer
	I1124 09:06:03.620863  710410 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:03.967865  710410 cache.go:107] acquiring lock: {Name:mkbcabeb5a23ff077ffdad64c71e9fe699d94040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.967876  710410 cache.go:107] acquiring lock: {Name:mk8023690ce5b18d9a1789b2f878bf92c1381799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.967905  710410 cache.go:107] acquiring lock: {Name:mk1d635b72f6d026600360916178f900a450350e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.967933  710410 cache.go:107] acquiring lock: {Name:mk92c82896924ab47423467b25ccd98ee4128baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.967960  710410 cache.go:107] acquiring lock: {Name:mk6b573bbd33cfc3c3f77668030fb064598572fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.967979  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:06:03.967992  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:06:03.967999  710410 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 158.964µs
	I1124 09:06:03.968002  710410 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 118.312µs
	I1124 09:06:03.968015  710410 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:06:03.968016  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:06:03.967919  710410 cache.go:107] acquiring lock: {Name:mkd74819cb24442927f7fb2cffd47478de40e14c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.968028  710410 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 105.4µs
	I1124 09:06:03.968040  710410 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:06:03.968009  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:06:03.968049  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:06:03.967893  710410 cache.go:107] acquiring lock: {Name:mk7f052905284f586f4f1cf24b8c34cc48e0b85b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.968055  710410 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 195.465µs
	I1124 09:06:03.968031  710410 cache.go:107] acquiring lock: {Name:mkf3a006b133f81ed32779d427a8d0a9b25f9000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:06:03.968056  710410 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 138.45µs
	I1124 09:06:03.968063  710410 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:06:03.968017  710410 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:06:03.968069  710410 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:06:03.968100  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:06:03.968108  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:06:03.968114  710410 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 227.518µs
	I1124 09:06:03.968124  710410 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:06:03.968127  710410 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 161.684µs
	I1124 09:06:03.968144  710410 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:06:03.968151  710410 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:06:03.968152  710410 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 159.681µs
	I1124 09:06:03.968161  710410 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:06:03.968177  710410 cache.go:87] Successfully saved all images to host disk.
	I1124 09:06:06.557723  710410 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-820576
	
	I1124 09:06:06.557765  710410 ubuntu.go:182] provisioning hostname "no-preload-820576"
	I1124 09:06:06.557867  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:06.577599  710410 main.go:143] libmachine: Using SSH client type: native
	I1124 09:06:06.577813  710410 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 09:06:06.577826  710410 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-820576 && echo "no-preload-820576" | sudo tee /etc/hostname
	I1124 09:06:06.734573  710410 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-820576
	
	I1124 09:06:06.734721  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:06.754862  710410 main.go:143] libmachine: Using SSH client type: native
	I1124 09:06:06.755130  710410 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1124 09:06:06.755162  710410 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820576/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:06:06.920799  710410 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:06:06.920836  710410 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:06:06.920867  710410 ubuntu.go:190] setting up certificates
	I1124 09:06:06.920889  710410 provision.go:84] configureAuth start
	I1124 09:06:06.920981  710410 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:06:06.941231  710410 provision.go:143] copyHostCerts
	I1124 09:06:06.941304  710410 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:06:06.941329  710410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:06:06.941399  710410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:06:06.941559  710410 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:06:06.941571  710410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:06:06.941616  710410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:06:06.941718  710410 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:06:06.941733  710410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:06:06.941774  710410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:06:06.941867  710410 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.no-preload-820576 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-820576]
	I1124 09:06:06.972955  710410 provision.go:177] copyRemoteCerts
	I1124 09:06:06.973028  710410 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:06:06.973077  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:06.996308  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:07.101497  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:06:07.119671  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:06:07.139380  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 09:06:07.159230  710410 provision.go:87] duration metric: took 238.32094ms to configureAuth
	I1124 09:06:07.159268  710410 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:06:07.159536  710410 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:06:07.159564  710410 machine.go:97] duration metric: took 3.776825081s to provisionDockerMachine
	I1124 09:06:07.159576  710410 start.go:293] postStartSetup for "no-preload-820576" (driver="docker")
	I1124 09:06:07.159592  710410 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:06:07.159671  710410 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:06:07.159728  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:07.179270  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:07.286516  710410 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:06:07.290562  710410 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:06:07.290599  710410 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:06:07.290610  710410 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:06:07.290663  710410 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:06:07.290742  710410 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:06:07.290873  710410 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:06:07.299309  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:06:07.317122  710410 start.go:296] duration metric: took 157.527884ms for postStartSetup
	I1124 09:06:07.317211  710410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:06:07.317246  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:07.336146  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:07.438137  710410 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:06:07.443360  710410 fix.go:56] duration metric: took 4.447269608s for fixHost
	I1124 09:06:07.443392  710410 start.go:83] releasing machines lock for "no-preload-820576", held for 4.447325578s
	I1124 09:06:07.443493  710410 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820576
	I1124 09:06:07.464550  710410 ssh_runner.go:195] Run: cat /version.json
	I1124 09:06:07.464611  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:07.464648  710410 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:06:07.464732  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:07.485402  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:07.487047  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:07.594978  710410 ssh_runner.go:195] Run: systemctl --version
	I1124 09:06:07.681513  710410 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:06:07.688502  710410 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:06:07.688582  710410 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:06:07.701206  710410 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:06:07.701281  710410 start.go:496] detecting cgroup driver to use...
	I1124 09:06:07.701318  710410 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:06:07.701495  710410 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:06:07.729598  710410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:06:07.750258  710410 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:06:07.750315  710410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:06:07.775934  710410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:06:06.598684  709503 cli_runner.go:164] Run: docker network inspect old-k8s-version-128377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:06:06.617474  709503 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 09:06:06.622019  709503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:06:06.633478  709503 kubeadm.go:884] updating cluster {Name:old-k8s-version-128377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:06:06.633622  709503 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 09:06:06.633672  709503 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:06:06.661265  709503 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:06:06.661287  709503 containerd.go:534] Images already preloaded, skipping extraction
	I1124 09:06:06.661334  709503 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:06:06.689156  709503 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:06:06.689178  709503 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:06:06.689192  709503 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 containerd true true} ...
	I1124 09:06:06.689295  709503 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-128377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:06:06.689357  709503 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:06:06.717670  709503 cni.go:84] Creating CNI manager for ""
	I1124 09:06:06.717695  709503 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:06:06.717716  709503 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:06:06.717743  709503 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-128377 NodeName:old-k8s-version-128377 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.cr
t StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:06:06.717921  709503 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-128377"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:06:06.718016  709503 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 09:06:06.726942  709503 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:06:06.727012  709503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:06:06.735521  709503 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1124 09:06:06.749766  709503 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:06:06.776782  709503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I1124 09:06:06.801084  709503 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:06:06.805881  709503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:06:06.818254  709503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:06.922245  709503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:06:06.949494  709503 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377 for IP: 192.168.103.2
	I1124 09:06:06.949517  709503 certs.go:195] generating shared ca certs ...
	I1124 09:06:06.949537  709503 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:06.949709  709503 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:06:06.949772  709503 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:06:06.949785  709503 certs.go:257] generating profile certs ...
	I1124 09:06:06.949913  709503 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.key
	I1124 09:06:06.950010  709503 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key.f2d0a0c1
	I1124 09:06:06.950061  709503 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key
	I1124 09:06:06.950193  709503 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:06:06.950232  709503 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:06:06.950247  709503 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:06:06.950291  709503 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:06:06.950335  709503 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:06:06.950367  709503 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:06:06.950428  709503 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:06:06.951361  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:06:06.972328  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:06:06.997133  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:06:07.017763  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:06:07.042410  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 09:06:07.067015  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 09:06:07.088536  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:06:07.106991  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:06:07.125258  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:06:07.145094  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:06:07.165370  709503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:06:07.186071  709503 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:06:07.202024  709503 ssh_runner.go:195] Run: openssl version
	I1124 09:06:07.209376  709503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:06:07.219680  709503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:06:07.224015  709503 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:06:07.224071  709503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:06:07.262906  709503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:06:07.279541  709503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:06:07.289657  709503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:07.294353  709503 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:07.294414  709503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:07.334199  709503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:06:07.343587  709503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:06:07.353579  709503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:06:07.358206  709503 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:06:07.358275  709503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:06:07.395934  709503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:06:07.404703  709503 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:06:07.408649  709503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:06:07.445334  709503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:06:07.488909  709503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:06:07.546273  709503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:06:07.608976  709503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:06:07.680011  709503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:06:07.743611  709503 kubeadm.go:401] StartCluster: {Name:old-k8s-version-128377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-128377 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:06:07.743756  709503 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:06:07.743847  709503 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:06:07.805661  709503 cri.go:89] found id: "2cde3fd3b1fec7bf82db1a556c3f52809087a3ba3436fa7b5d61a127b5a21f8a"
	I1124 09:06:07.805694  709503 cri.go:89] found id: "386284bd736fa410b6ec7b285a702805b8191ae596f733130a95a6b9cdd592ae"
	I1124 09:06:07.805700  709503 cri.go:89] found id: "14fb25e463548893bd8f955087086fc8bd977521886ef75c9d23fec76d610697"
	I1124 09:06:07.805704  709503 cri.go:89] found id: "5282f1c920eb7ff37391f75191d28585e4d302ce4ec44fb44ce68a88c776b537"
	I1124 09:06:07.805709  709503 cri.go:89] found id: "a7a841ea7303a40b7b557fbe769c57a1562346d875b1853a8a729ad668090cb5"
	I1124 09:06:07.805714  709503 cri.go:89] found id: "a9a5857553e67019e47641c1970bb0d5555afd6b608c94a94501dd485efac0c4"
	I1124 09:06:07.805718  709503 cri.go:89] found id: "818537e08c0605796949e72c73a034b7d5f104ce598d4a12f0ed8bf30de9c646"
	I1124 09:06:07.805722  709503 cri.go:89] found id: "370631aaaf577fb6a343282108f71bb03e72ef6024de9d9f8e2a2eeb7e16e746"
	I1124 09:06:07.805726  709503 cri.go:89] found id: "f5eddecfb179fe94de6b3892600fc1870efa5679c82874d72a3b301753e6f7d4"
	I1124 09:06:07.805736  709503 cri.go:89] found id: "5d9ec22e03b8b0446d34a5b300037519eb0aa0be6b1e6c451907abb271f71839"
	I1124 09:06:07.805740  709503 cri.go:89] found id: "842bd9db2d84b65b054e4b006bfb9c11b98ac3cdcbe13cd821183480cd046d8a"
	I1124 09:06:07.805744  709503 cri.go:89] found id: "8df3112d99751cf0ed66add055e0df50e3c944dbb66b787e2e3ae37efbec7d4e"
	I1124 09:06:07.805748  709503 cri.go:89] found id: ""
	I1124 09:06:07.805800  709503 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1124 09:06:07.858533  709503 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"14fb25e463548893bd8f955087086fc8bd977521886ef75c9d23fec76d610697","pid":953,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14fb25e463548893bd8f955087086fc8bd977521886ef75c9d23fec76d610697","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14fb25e463548893bd8f955087086fc8bd977521886ef75c9d23fec76d610697/rootfs","created":"2025-11-24T09:06:07.767682233Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.0","io.kubernetes.cri.sandbox-id":"ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"97465a4cd4188931f10ea28e1a2907e2"},"owner":"root"},{"ociVersion":
"1.2.1","id":"2cde3fd3b1fec7bf82db1a556c3f52809087a3ba3436fa7b5d61a127b5a21f8a","pid":969,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cde3fd3b1fec7bf82db1a556c3f52809087a3ba3436fa7b5d61a127b5a21f8a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cde3fd3b1fec7bf82db1a556c3f52809087a3ba3436fa7b5d61a127b5a21f8a/rootfs","created":"2025-11-24T09:06:07.770209322Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.0","io.kubernetes.cri.sandbox-id":"7a9ceda96c311eb5009b83f30ee6243b2d488849704e328dffef8c760fbb8066","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"551952eef6cb6e731333d664adafec03"},"owner":"root"},{"ociVersion":"1.2.1","id":"386284bd736fa410b6ec7b285a702805b8191ae596f733130a95a6b9cdd592ae","pid":952,"status":"
running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/386284bd736fa410b6ec7b285a702805b8191ae596f733130a95a6b9cdd592ae","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/386284bd736fa410b6ec7b285a702805b8191ae596f733130a95a6b9cdd592ae/rootfs","created":"2025-11-24T09:06:07.75785436Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf","io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1b146c92afb8c14021010a6f689d3581"},"owner":"root"},{"ociVersion":"1.2.1","id":"5282f1c920eb7ff37391f75191d28585e4d302ce4ec44fb44ce68a88c776b537","pid":938,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5282f1c920eb7ff37391f75191d28585e4d302ce4ec44fb44ce68a88c77
6b537","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5282f1c920eb7ff37391f75191d28585e4d302ce4ec44fb44ce68a88c776b537/rootfs","created":"2025-11-24T09:06:07.752935382Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.0","io.kubernetes.cri.sandbox-id":"94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"869d206dcde1c4f8d5d525ee4860a861"},"owner":"root"},{"ociVersion":"1.2.1","id":"7a9ceda96c311eb5009b83f30ee6243b2d488849704e328dffef8c760fbb8066","pid":861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a9ceda96c311eb5009b83f30ee6243b2d488849704e328dffef8c760fbb8066","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a9ceda96c311eb5009b83f30ee624
3b2d488849704e328dffef8c760fbb8066/rootfs","created":"2025-11-24T09:06:07.629953763Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7a9ceda96c311eb5009b83f30ee6243b2d488849704e328dffef8c760fbb8066","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-128377_551952eef6cb6e731333d664adafec03","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"551952eef6cb6e731333d664adafec03"},"owner":"root"},{"ociVersion":"1.2.1","id":"94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9","pid":812,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/
94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9/rootfs","created":"2025-11-24T09:06:07.585036749Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-128377_869d206dcde1c4f8d5d525ee4860a861","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"869d206dcde1c4f8d5d525ee4860a861"},"owner":"root"},{"ociVersion":"1.2.1","id":
"ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b","pid":840,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b/rootfs","created":"2025-11-24T09:06:07.601657583Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-128377_97465a4cd4188931f10ea28e1a2907e2","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-ve
rsion-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"97465a4cd4188931f10ea28e1a2907e2"},"owner":"root"},{"ociVersion":"1.2.1","id":"e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf","pid":868,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf/rootfs","created":"2025-11-24T09:06:07.628088181Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8
s-version-128377_1b146c92afb8c14021010a6f689d3581","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-128377","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1b146c92afb8c14021010a6f689d3581"},"owner":"root"}]
	I1124 09:06:07.858785  709503 cri.go:126] list returned 8 containers
	I1124 09:06:07.858815  709503 cri.go:129] container: {ID:14fb25e463548893bd8f955087086fc8bd977521886ef75c9d23fec76d610697 Status:running}
	I1124 09:06:07.858852  709503 cri.go:135] skipping {14fb25e463548893bd8f955087086fc8bd977521886ef75c9d23fec76d610697 running}: state = "running", want "paused"
	I1124 09:06:07.858872  709503 cri.go:129] container: {ID:2cde3fd3b1fec7bf82db1a556c3f52809087a3ba3436fa7b5d61a127b5a21f8a Status:running}
	I1124 09:06:07.858888  709503 cri.go:135] skipping {2cde3fd3b1fec7bf82db1a556c3f52809087a3ba3436fa7b5d61a127b5a21f8a running}: state = "running", want "paused"
	I1124 09:06:07.858896  709503 cri.go:129] container: {ID:386284bd736fa410b6ec7b285a702805b8191ae596f733130a95a6b9cdd592ae Status:running}
	I1124 09:06:07.858908  709503 cri.go:135] skipping {386284bd736fa410b6ec7b285a702805b8191ae596f733130a95a6b9cdd592ae running}: state = "running", want "paused"
	I1124 09:06:07.858915  709503 cri.go:129] container: {ID:5282f1c920eb7ff37391f75191d28585e4d302ce4ec44fb44ce68a88c776b537 Status:running}
	I1124 09:06:07.858922  709503 cri.go:135] skipping {5282f1c920eb7ff37391f75191d28585e4d302ce4ec44fb44ce68a88c776b537 running}: state = "running", want "paused"
	I1124 09:06:07.858927  709503 cri.go:129] container: {ID:7a9ceda96c311eb5009b83f30ee6243b2d488849704e328dffef8c760fbb8066 Status:running}
	I1124 09:06:07.858944  709503 cri.go:131] skipping 7a9ceda96c311eb5009b83f30ee6243b2d488849704e328dffef8c760fbb8066 - not in ps
	I1124 09:06:07.858958  709503 cri.go:129] container: {ID:94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9 Status:running}
	I1124 09:06:07.858965  709503 cri.go:131] skipping 94f643af46ca12ae6a92c287a1c2aad65c2c3ddc4d9d80cec860963137185fb9 - not in ps
	I1124 09:06:07.858970  709503 cri.go:129] container: {ID:ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b Status:running}
	I1124 09:06:07.858975  709503 cri.go:131] skipping ba7095482d23ca0d2fcee762fdbbeea2c46e6535497242fedbdf28da0c621b3b - not in ps
	I1124 09:06:07.858980  709503 cri.go:129] container: {ID:e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf Status:running}
	I1124 09:06:07.858986  709503 cri.go:131] skipping e4f96999f5f1383176428b512b3ef0f99747176080743e8466d318aeb40590bf - not in ps
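The listing step above first collects kube-system container IDs with crictl, then walks the `runc list -f json` output and keeps only entries that are both reported by crictl and already in the wanted state (here "paused", so every running container and every sandbox is skipped). A minimal, hypothetical Go sketch of that filtering, using the JSON field names visible in the log (an illustration, not minikube's cri.go):

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer holds the fields of `runc list -f json` output needed for
// the filtering; names follow the JSON shown in the log above.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// filterByState mirrors the two skip rules logged above: drop IDs that
// crictl did not report (sandboxes), and drop containers whose state does
// not match the wanted one.
func filterByState(runcJSON []byte, reportedByCrictl map[string]bool, wantState string) ([]string, error) {
	var all []runcContainer
	if err := json.Unmarshal(runcJSON, &all); err != nil {
		return nil, err
	}
	var keep []string
	for _, c := range all {
		if !reportedByCrictl[c.ID] {
			continue // "skipping <id> - not in ps"
		}
		if c.Status != wantState {
			continue // e.g. state = "running" when "paused" is wanted
		}
		keep = append(keep, c.ID)
	}
	return keep, nil
}

func main() {
	sample := []byte(`[{"id":"aaa","status":"running"},{"id":"bbb","status":"paused"}]`)
	ids, _ := filterByState(sample, map[string]bool{"aaa": true, "bbb": true}, "paused")
	fmt.Println(ids) // [bbb]
}
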
	I1124 09:06:07.859050  709503 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:06:07.892125  709503 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:06:07.892148  709503 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:06:07.892207  709503 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:06:07.909145  709503 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:06:07.909911  709503 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-128377" does not appear in /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:06:07.910245  709503 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-435860/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-128377" cluster setting kubeconfig missing "old-k8s-version-128377" context setting]
	I1124 09:06:07.911503  709503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:07.914069  709503 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:06:07.930566  709503 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1124 09:06:07.930786  709503 kubeadm.go:602] duration metric: took 38.609119ms to restartPrimaryControlPlane
	I1124 09:06:07.930903  709503 kubeadm.go:403] duration metric: took 187.309002ms to StartCluster
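The kubeconfig repair above (kubeconfig.go:47/62) fires because the profile's cluster and context entries are missing from the kubeconfig file, so minikube rewrites it before proceeding. A minimal sketch of that existence check, assuming the standard kubeconfig layout and gopkg.in/yaml.v3 for parsing (illustrative only, not minikube's kubeconfig.go):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeconfig models only the parts of a kubeconfig file needed to answer
// "does a cluster entry with this name exist?".
type kubeconfig struct {
	Clusters []struct {
		Name string `yaml:"name"`
	} `yaml:"clusters"`
	Contexts []struct {
		Name string `yaml:"name"`
	} `yaml:"contexts"`
}

// hasCluster reports whether a cluster entry with the given name exists.
func hasCluster(path, name string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	var cfg kubeconfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return false, err
	}
	for _, c := range cfg.Clusters {
		if c.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCluster(os.Getenv("KUBECONFIG"), "old-k8s-version-128377")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cluster entry present:", ok)
}
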
	I1124 09:06:07.930972  709503 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:07.931189  709503 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:06:07.933815  709503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:07.934627  709503 config.go:182] Loaded profile config "old-k8s-version-128377": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 09:06:07.934764  709503 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:06:07.934918  709503 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-128377"
	I1124 09:06:07.934939  709503 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-128377"
	W1124 09:06:07.934947  709503 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:06:07.934979  709503 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:06:07.934730  709503 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:06:07.935328  709503 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-128377"
	I1124 09:06:07.935353  709503 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-128377"
	W1124 09:06:07.935431  709503 addons.go:248] addon metrics-server should already be in state true
	I1124 09:06:07.935543  709503 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:06:07.935314  709503 addons.go:70] Setting dashboard=true in profile "old-k8s-version-128377"
	I1124 09:06:07.935763  709503 addons.go:239] Setting addon dashboard=true in "old-k8s-version-128377"
	W1124 09:06:07.935776  709503 addons.go:248] addon dashboard should already be in state true
	I1124 09:06:07.935836  709503 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:06:07.935298  709503 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-128377"
	I1124 09:06:07.935911  709503 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-128377"
	I1124 09:06:07.936129  709503 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:06:07.936420  709503 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:06:07.936429  709503 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:06:07.937728  709503 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:06:07.938151  709503 out.go:179] * Verifying Kubernetes components...
	I1124 09:06:07.939350  709503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:07.968860  709503 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-128377"
	W1124 09:06:07.968932  709503 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:06:07.968967  709503 host.go:66] Checking if "old-k8s-version-128377" exists ...
	I1124 09:06:07.969542  709503 cli_runner.go:164] Run: docker container inspect old-k8s-version-128377 --format={{.State.Status}}
	I1124 09:06:07.970612  709503 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:06:07.971688  709503 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:07.971709  709503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:06:07.971776  709503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:06:07.982548  709503 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:06:07.983751  709503 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:06:07.984943  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:06:07.984964  709503 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:06:07.985032  709503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:06:07.989064  709503 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 09:06:07.798355  710410 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:06:07.959783  710410 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:06:08.169711  710410 docker.go:234] disabling docker service ...
	I1124 09:06:08.170079  710410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:06:08.192752  710410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:06:08.217711  710410 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:06:08.371537  710410 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:06:08.520292  710410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:06:08.542357  710410 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:06:08.567348  710410 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:08.935352  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 09:06:08.946105  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:06:08.956076  710410 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:06:08.956151  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:06:08.965899  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:06:08.975290  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:06:08.984942  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:06:08.994561  710410 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:06:09.003383  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:06:09.013261  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:06:09.023845  710410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:06:09.033552  710410 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:06:09.041637  710410 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:06:09.049555  710410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:09.149233  710410 ssh_runner.go:195] Run: sudo systemctl restart containerd
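The sequence of `sed -i -r` edits above rewrites /etc/containerd/config.toml in place (sandbox image, cgroup driver, runtime type, CNI conf dir) before containerd is restarted. A minimal Go sketch of one such edit, the SystemdCgroup toggle, applied to an in-memory string (an illustration of the regex, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup performs the same substitution as
// `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'`,
// preserving the original indentation of the key.
func setSystemdCgroup(configTOML string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = true")
}

func main() {
	in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = false\n"
	fmt.Print(setSystemdCgroup(in))
}
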
	I1124 09:06:09.260304  710410 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:06:09.260382  710410 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:06:09.265056  710410 start.go:564] Will wait 60s for crictl version
	I1124 09:06:09.265129  710410 ssh_runner.go:195] Run: which crictl
	I1124 09:06:09.269253  710410 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:06:09.298618  710410 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:06:09.298686  710410 ssh_runner.go:195] Run: containerd --version
	I1124 09:06:09.322033  710410 ssh_runner.go:195] Run: containerd --version
	I1124 09:06:09.346867  710410 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1124 09:06:05.478330  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:07.990188  709503 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 09:06:07.990211  709503 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 09:06:07.990277  709503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:06:08.019953  709503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:06:08.022995  709503 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:08.023018  709503 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:06:08.023081  709503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-128377
	I1124 09:06:08.038684  709503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:06:08.047506  709503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:06:08.074610  709503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/old-k8s-version-128377/id_rsa Username:docker}
	I1124 09:06:08.213005  709503 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 09:06:08.213118  709503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1124 09:06:08.218819  709503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:06:08.229963  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:06:08.229989  709503 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:06:08.247835  709503 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-128377" to be "Ready" ...
	I1124 09:06:08.254634  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:06:08.254660  709503 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:06:08.255027  709503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:08.295607  709503 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 09:06:08.295682  709503 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 09:06:08.298266  709503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:08.311154  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:06:08.311197  709503 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:06:08.333308  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:06:08.333347  709503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:06:08.350278  709503 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:06:08.350304  709503 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 09:06:08.380567  709503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:06:08.382336  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:06:08.382375  709503 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:06:08.406934  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:06:08.406969  709503 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:06:08.450715  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:06:08.450745  709503 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:06:08.512388  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:06:08.512416  709503 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:06:08.534866  709503 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:06:08.534894  709503 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:06:08.568308  709503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:06:10.558760  709503 node_ready.go:49] node "old-k8s-version-128377" is "Ready"
	I1124 09:06:10.558793  709503 node_ready.go:38] duration metric: took 2.310917996s for node "old-k8s-version-128377" to be "Ready" ...
	I1124 09:06:10.558809  709503 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:06:10.558874  709503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:06:09.348190  710410 cli_runner.go:164] Run: docker network inspect no-preload-820576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:06:09.365511  710410 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 09:06:09.369983  710410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:06:09.380785  710410 kubeadm.go:884] updating cluster {Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:06:09.381014  710410 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:09.698668  710410 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:10.063688  710410 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:10.401786  710410 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:06:10.401880  710410 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:06:10.446642  710410 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:06:10.446676  710410 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:06:10.446687  710410 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1124 09:06:10.446829  710410 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:06:10.446907  710410 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:06:10.479317  710410 cni.go:84] Creating CNI manager for ""
	I1124 09:06:10.479342  710410 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:06:10.479365  710410 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:06:10.479414  710410 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820576 NodeName:no-preload-820576 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:06:10.479636  710410 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-820576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:06:10.479724  710410 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:06:10.489536  710410 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:06:10.489618  710410 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:06:10.498562  710410 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1124 09:06:10.514039  710410 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:06:10.530535  710410 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
	I1124 09:06:10.557382  710410 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:06:10.563118  710410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:06:10.589362  710410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:10.740319  710410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:06:10.771888  710410 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576 for IP: 192.168.85.2
	I1124 09:06:10.771931  710410 certs.go:195] generating shared ca certs ...
	I1124 09:06:10.771953  710410 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:10.773114  710410 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:06:10.773247  710410 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:06:10.773282  710410 certs.go:257] generating profile certs ...
	I1124 09:06:10.773446  710410 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.key
	I1124 09:06:10.773567  710410 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key.402ae632
	I1124 09:06:10.773625  710410 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key
	I1124 09:06:10.773794  710410 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:06:10.773841  710410 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:06:10.773865  710410 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:06:10.773909  710410 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:06:10.773946  710410 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:06:10.773982  710410 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:06:10.774051  710410 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:06:10.774961  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:06:10.800274  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:06:10.824284  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:06:10.863611  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:06:10.896300  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:06:10.937202  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:06:10.967290  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:06:10.990246  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 09:06:11.011641  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:06:11.032149  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:06:11.070004  710410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:06:11.098006  710410 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:06:11.112693  710410 ssh_runner.go:195] Run: openssl version
	I1124 09:06:11.120012  710410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:06:11.133685  710410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:06:11.142019  710410 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:06:11.142082  710410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:06:11.199392  710410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:06:11.208974  710410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:06:11.219230  710410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:06:11.224709  710410 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:06:11.224787  710410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:06:11.263304  710410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:06:11.273452  710410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:06:11.285214  710410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:11.290634  710410 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:11.290697  710410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:11.334365  710410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:06:11.343999  710410 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:06:11.349716  710410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:06:11.393022  710410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:06:11.429451  710410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:06:11.467433  710410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:06:11.523563  710410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:06:11.581537  710410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
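The run of `openssl x509 -noout -in <cert> -checkend 86400` commands above verifies that each control-plane certificate remains valid for at least another 24 hours before the existing cluster is reused. A minimal Go counterpart using crypto/x509 (an illustration of the check, not how minikube performs it):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at pemPath
// expires within the next duration d, the same question that
// `openssl x509 -checkend <seconds>` answers.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
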
	I1124 09:06:11.715888  710410 kubeadm.go:401] StartCluster: {Name:no-preload-820576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-820576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:06:11.715993  710410 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:06:11.716044  710410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:06:11.746839  710410 cri.go:89] found id: "1ccff83dea1f3b004fd2da523645686868800b09a6997c0e238c4954c9b650b5"
	I1124 09:06:11.746867  710410 cri.go:89] found id: "372566a488aa6257b59eba829cf1e66299ccffe9066320bc512378d4a8f37fc3"
	I1124 09:06:11.746872  710410 cri.go:89] found id: "f013ec6444310f79abf35dd005056c59b873c4bea9b56849cc31c4d45f1fd1ea"
	I1124 09:06:11.746876  710410 cri.go:89] found id: "d11c1a1929cbd874879bd2ca658768b3b17486a565a73f3198763d8937ab7159"
	I1124 09:06:11.746879  710410 cri.go:89] found id: "3792977e1319f5110036c4177368941dfeff0808bfb81b4f1f9accba9dc895b0"
	I1124 09:06:11.746882  710410 cri.go:89] found id: "1cc365be5ed1fbe0ff7cbef3bba9928f6de3ee57c3a2f87a37b5414ce840c1e5"
	I1124 09:06:11.746885  710410 cri.go:89] found id: "942b50869b3b6efe304af13454ac7bcfcd639ee8d85edb9543534540fab1a5ac"
	I1124 09:06:11.746887  710410 cri.go:89] found id: "0d5c89e98d645bf73cd4c5c3f30b9202f3ec35a62f3f8d3ae062d5d623eccb24"
	I1124 09:06:11.746892  710410 cri.go:89] found id: ""
	I1124 09:06:11.746945  710410 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1124 09:06:11.761985  710410 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T09:06:11Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1124 09:06:11.762058  710410 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:06:11.775299  710410 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:06:11.775320  710410 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:06:11.775372  710410 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:06:11.787178  710410 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:06:11.788096  710410 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-820576" does not appear in /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:06:11.788567  710410 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-435860/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-820576" cluster setting kubeconfig missing "no-preload-820576" context setting]
	I1124 09:06:11.789318  710410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:11.819317  710410 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:06:11.829219  710410 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 09:06:11.829255  710410 kubeadm.go:602] duration metric: took 53.926233ms to restartPrimaryControlPlane
	I1124 09:06:11.829264  710410 kubeadm.go:403] duration metric: took 113.387483ms to StartCluster
	I1124 09:06:11.829283  710410 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:11.829358  710410 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:06:11.830779  710410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:11.881377  710410 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:06:11.881518  710410 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:06:11.881659  710410 addons.go:70] Setting storage-provisioner=true in profile "no-preload-820576"
	I1124 09:06:11.881685  710410 config.go:182] Loaded profile config "no-preload-820576": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:06:11.881695  710410 addons.go:70] Setting metrics-server=true in profile "no-preload-820576"
	I1124 09:06:11.881692  710410 addons.go:70] Setting default-storageclass=true in profile "no-preload-820576"
	I1124 09:06:11.881713  710410 addons.go:239] Setting addon metrics-server=true in "no-preload-820576"
	W1124 09:06:11.881721  710410 addons.go:248] addon metrics-server should already be in state true
	I1124 09:06:11.881716  710410 addons.go:70] Setting dashboard=true in profile "no-preload-820576"
	I1124 09:06:11.881690  710410 addons.go:239] Setting addon storage-provisioner=true in "no-preload-820576"
	I1124 09:06:11.881718  710410 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820576"
	I1124 09:06:11.881736  710410 addons.go:239] Setting addon dashboard=true in "no-preload-820576"
	W1124 09:06:11.881743  710410 addons.go:248] addon storage-provisioner should already be in state true
	W1124 09:06:11.881745  710410 addons.go:248] addon dashboard should already be in state true
	I1124 09:06:11.881753  710410 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:06:11.881768  710410 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:06:11.881774  710410 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:06:11.882069  710410 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:06:11.882237  710410 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:06:11.882245  710410 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:06:11.882250  710410 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:06:11.939425  710410 out.go:179] * Verifying Kubernetes components...
	I1124 09:06:11.939931  710410 addons.go:239] Setting addon default-storageclass=true in "no-preload-820576"
	W1124 09:06:11.940692  710410 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:06:11.940739  710410 host.go:66] Checking if "no-preload-820576" exists ...
	I1124 09:06:11.941244  710410 cli_runner.go:164] Run: docker container inspect no-preload-820576 --format={{.State.Status}}
	I1124 09:06:11.941264  710410 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:06:11.941301  710410 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 09:06:11.941329  710410 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:06:11.946558  710410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:11.948132  710410 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 09:06:11.948155  710410 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 09:06:11.948179  710410 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:11.948196  710410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:06:11.948220  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:11.948266  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:11.953192  710410 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:06:07.757449  712609 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 09:06:07.757732  712609 start.go:159] libmachine.API.Create for "embed-certs-841285" (driver="docker")
	I1124 09:06:07.757769  712609 client.go:173] LocalClient.Create starting
	I1124 09:06:07.757822  712609 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem
	I1124 09:06:07.757857  712609 main.go:143] libmachine: Decoding PEM data...
	I1124 09:06:07.757876  712609 main.go:143] libmachine: Parsing certificate...
	I1124 09:06:07.757933  712609 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem
	I1124 09:06:07.757954  712609 main.go:143] libmachine: Decoding PEM data...
	I1124 09:06:07.757966  712609 main.go:143] libmachine: Parsing certificate...
	I1124 09:06:07.758289  712609 cli_runner.go:164] Run: docker network inspect embed-certs-841285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 09:06:07.786287  712609 cli_runner.go:211] docker network inspect embed-certs-841285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 09:06:07.786412  712609 network_create.go:284] running [docker network inspect embed-certs-841285] to gather additional debugging logs...
	I1124 09:06:07.786444  712609 cli_runner.go:164] Run: docker network inspect embed-certs-841285
	W1124 09:06:07.812736  712609 cli_runner.go:211] docker network inspect embed-certs-841285 returned with exit code 1
	I1124 09:06:07.812786  712609 network_create.go:287] error running [docker network inspect embed-certs-841285]: docker network inspect embed-certs-841285: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-841285 not found
	I1124 09:06:07.812805  712609 network_create.go:289] output of [docker network inspect embed-certs-841285]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-841285 not found
	
	** /stderr **
	I1124 09:06:07.812915  712609 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:06:07.838220  712609 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c654f70fdf0e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:f7:ca:91:9d:ad} reservation:<nil>}
	I1124 09:06:07.839216  712609 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1081c4000c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:b1:6d:32:2c:78} reservation:<nil>}
	I1124 09:06:07.840271  712609 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30fdd1988974 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:59:2f:0a:61:81} reservation:<nil>}
	I1124 09:06:07.841370  712609 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cd297979890 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:91:f3:e4:95:17} reservation:<nil>}
	I1124 09:06:07.842376  712609 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-7957ce7dc9ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:12:7d:52:b6:17:25} reservation:<nil>}
	I1124 09:06:07.843628  712609 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d42cf0}
	I1124 09:06:07.843668  712609 network_create.go:124] attempt to create docker network embed-certs-841285 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 09:06:07.843740  712609 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-841285 embed-certs-841285
	I1124 09:06:07.940716  712609 network_create.go:108] docker network embed-certs-841285 192.168.94.0/24 created
	I1124 09:06:07.940787  712609 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-841285" container
	I1124 09:06:07.940887  712609 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 09:06:07.991813  712609 cli_runner.go:164] Run: docker volume create embed-certs-841285 --label name.minikube.sigs.k8s.io=embed-certs-841285 --label created_by.minikube.sigs.k8s.io=true
	I1124 09:06:08.061119  712609 oci.go:103] Successfully created a docker volume embed-certs-841285
	I1124 09:06:08.061364  712609 cli_runner.go:164] Run: docker run --rm --name embed-certs-841285-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-841285 --entrypoint /usr/bin/test -v embed-certs-841285:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 09:06:08.685239  712609 oci.go:107] Successfully prepared a docker volume embed-certs-841285
	I1124 09:06:08.685329  712609 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1124 09:06:08.685345  712609 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 09:06:08.685429  712609 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-841285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 09:06:11.957004  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:06:11.957029  710410 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:06:11.957098  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:11.977858  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:11.980623  710410 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:11.980648  710410 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:06:11.980706  710410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820576
	I1124 09:06:11.987358  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:11.995845  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:12.012731  710410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/no-preload-820576/id_rsa Username:docker}
	I1124 09:06:12.116424  710410 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 09:06:12.116446  710410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1124 09:06:12.124317  710410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:06:12.140300  710410 node_ready.go:35] waiting up to 6m0s for node "no-preload-820576" to be "Ready" ...
	I1124 09:06:12.145652  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:06:12.145676  710410 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:06:12.145652  710410 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 09:06:12.145723  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:12.145726  710410 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 09:06:12.145895  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:12.167372  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:06:12.167400  710410 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:06:12.188298  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:06:12.188336  710410 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:06:12.189071  710410 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:06:12.189091  710410 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 09:06:12.208709  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:06:12.208735  710410 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:06:12.212245  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:06:12.251739  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:06:12.251780  710410 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1124 09:06:12.254669  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.254725  710410 retry.go:31] will retry after 267.520426ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:06:12.254757  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.254783  710410 retry.go:31] will retry after 187.263022ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.267555  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:06:12.267581  710410 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W1124 09:06:12.271523  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.271557  710410 retry.go:31] will retry after 197.857566ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.280900  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:06:12.280922  710410 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:06:12.293352  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:06:12.293374  710410 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:06:12.305732  710410 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:06:12.305754  710410 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:06:12.393825  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:06:12.442360  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1124 09:06:12.459398  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.459609  710410 retry.go:31] will retry after 128.110746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.470528  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1124 09:06:12.515023  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.515066  710410 retry.go:31] will retry after 492.443212ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.523209  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1124 09:06:12.537365  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.537415  710410 retry.go:31] will retry after 547.534652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1124 09:06:12.576068  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.576105  710410 retry.go:31] will retry after 490.57105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.588191  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1124 09:06:12.645758  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:12.645813  710410 retry.go:31] will retry after 546.072247ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:11.569200  709503 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.314051805s)
	I1124 09:06:12.034820  709503 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.736518516s)
	I1124 09:06:12.154054  709503 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.773444144s)
	I1124 09:06:12.154100  709503 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-128377"
	I1124 09:06:13.064354  709503 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.495850323s)
	I1124 09:06:13.064429  709503 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.505387882s)
	I1124 09:06:13.064449  709503 api_server.go:72] duration metric: took 5.129072136s to wait for apiserver process to appear ...
	I1124 09:06:13.064626  709503 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:06:13.064742  709503 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:06:13.067049  709503 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-128377 addons enable metrics-server
	
	I1124 09:06:13.068589  709503 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1124 09:06:10.479269  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 09:06:10.479328  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:10.479389  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:10.510533  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:10.510577  685562 cri.go:89] found id: "1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:06:10.510583  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:10.510586  685562 cri.go:89] found id: ""
	I1124 09:06:10.510593  685562 logs.go:282] 3 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:10.510670  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.515076  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.519239  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.523408  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:10.523496  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:10.573118  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:10.573140  685562 cri.go:89] found id: ""
	I1124 09:06:10.573151  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:10.573203  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.580440  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:10.580552  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:10.633397  685562 cri.go:89] found id: ""
	I1124 09:06:10.633453  685562 logs.go:282] 0 containers: []
	W1124 09:06:10.633475  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:10.633493  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:10.633564  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:10.690354  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:10.690382  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:10.690413  685562 cri.go:89] found id: ""
	I1124 09:06:10.690423  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:10.690531  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.695963  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.701490  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:10.701564  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:10.737302  685562 cri.go:89] found id: ""
	I1124 09:06:10.737334  685562 logs.go:282] 0 containers: []
	W1124 09:06:10.737346  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:10.737355  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:10.737429  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:10.775391  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:10.775414  685562 cri.go:89] found id: "4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:06:10.775432  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:10.775437  685562 cri.go:89] found id: ""
	I1124 09:06:10.775447  685562 logs.go:282] 3 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:10.775534  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.781150  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.786536  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:10.792009  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:10.792081  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:10.834058  685562 cri.go:89] found id: ""
	I1124 09:06:10.834086  685562 logs.go:282] 0 containers: []
	W1124 09:06:10.834096  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:10.834105  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:10.834176  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:10.878003  685562 cri.go:89] found id: ""
	I1124 09:06:10.878038  685562 logs.go:282] 0 containers: []
	W1124 09:06:10.878049  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:10.878062  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:10.878087  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:10.933766  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:10.933861  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:10.979203  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:10.979242  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:11.070829  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:11.070863  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1124 09:06:13.007920  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:13.067827  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:13.085967  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1124 09:06:13.158832  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:13.158873  710410 retry.go:31] will retry after 555.195364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:13.193126  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1124 09:06:13.228891  710410 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:13.228930  710410 retry.go:31] will retry after 606.090345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1124 09:06:13.714698  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:13.835800  710410 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:14.767388  710410 node_ready.go:49] node "no-preload-820576" is "Ready"
	I1124 09:06:14.767429  710410 node_ready.go:38] duration metric: took 2.627095095s for node "no-preload-820576" to be "Ready" ...
	I1124 09:06:14.767447  710410 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:06:14.767526  710410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:06:15.446416  710410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.360392286s)
	I1124 09:06:15.446753  710410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.253580665s)
	I1124 09:06:15.447060  710410 addons.go:495] Verifying addon metrics-server=true in "no-preload-820576"
	I1124 09:06:15.448304  710410 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-820576 addons enable metrics-server
	
	I1124 09:06:15.502159  710410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.787411152s)
	I1124 09:06:15.502312  710410 api_server.go:72] duration metric: took 3.620869952s to wait for apiserver process to appear ...
	I1124 09:06:15.502330  710410 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:06:15.502354  710410 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 09:06:15.502435  710410 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.666417463s)
	I1124 09:06:15.507693  710410 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:06:15.507720  710410 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:06:15.510070  710410 out.go:179] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	I1124 09:06:13.069584  709503 addons.go:530] duration metric: took 5.134824432s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1124 09:06:13.074420  709503 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1124 09:06:13.074441  709503 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1124 09:06:13.565056  709503 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:06:13.573074  709503 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 09:06:13.576874  709503 api_server.go:141] control plane version: v1.28.0
	I1124 09:06:13.576905  709503 api_server.go:131] duration metric: took 512.183788ms to wait for apiserver health ...
	I1124 09:06:13.576916  709503 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:06:13.584383  709503 system_pods.go:59] 9 kube-system pods found
	I1124 09:06:13.584495  709503 system_pods.go:61] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:13.584512  709503 system_pods.go:61] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:06:13.584522  709503 system_pods.go:61] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:06:13.584532  709503 system_pods.go:61] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:06:13.584541  709503 system_pods.go:61] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:06:13.584561  709503 system_pods.go:61] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:06:13.584568  709503 system_pods.go:61] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:06:13.584576  709503 system_pods.go:61] "metrics-server-57f55c9bc5-77qfh" [cdcc0048-22cc-48f4-be39-99715f4aaa66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:06:13.584583  709503 system_pods.go:61] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:13.584592  709503 system_pods.go:74] duration metric: took 7.668146ms to wait for pod list to return data ...
	I1124 09:06:13.584602  709503 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:06:13.588282  709503 default_sa.go:45] found service account: "default"
	I1124 09:06:13.588332  709503 default_sa.go:55] duration metric: took 3.724838ms for default service account to be created ...
	I1124 09:06:13.588350  709503 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:06:13.592454  709503 system_pods.go:86] 9 kube-system pods found
	I1124 09:06:13.592506  709503 system_pods.go:89] "coredns-5dd5756b68-vxxnm" [b84bae0f-9f75-4d1c-b2ed-da0c10a141cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:13.592520  709503 system_pods.go:89] "etcd-old-k8s-version-128377" [57d9a965-4f1a-455f-beec-16601bd921e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:06:13.592530  709503 system_pods.go:89] "kindnet-gbp66" [49954742-ea7f-466f-80d8-7d6ac88ce36c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:06:13.592541  709503 system_pods.go:89] "kube-apiserver-old-k8s-version-128377" [08c8bb94-e597-4293-80f1-0981f51b22a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:06:13.592554  709503 system_pods.go:89] "kube-controller-manager-old-k8s-version-128377" [1f721a4b-e1c3-4e18-92b4-13673dc37600] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:06:13.592567  709503 system_pods.go:89] "kube-proxy-fpbs2" [52128126-550d-4795-9fa1-e1d3d9510dd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:06:13.592578  709503 system_pods.go:89] "kube-scheduler-old-k8s-version-128377" [399dcc23-9970-4146-82b3-c72d3e5f621b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:06:13.592588  709503 system_pods.go:89] "metrics-server-57f55c9bc5-77qfh" [cdcc0048-22cc-48f4-be39-99715f4aaa66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:06:13.592606  709503 system_pods.go:89] "storage-provisioner" [7e4f56c0-0b49-47cd-9278-129ad898b781] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:13.592616  709503 system_pods.go:126] duration metric: took 4.252001ms to wait for k8s-apps to be running ...
	I1124 09:06:13.592626  709503 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:06:13.592674  709503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:06:13.612442  709503 system_svc.go:56] duration metric: took 19.805358ms WaitForService to wait for kubelet
	I1124 09:06:13.612506  709503 kubeadm.go:587] duration metric: took 5.677127372s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:06:13.612540  709503 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:06:13.615980  709503 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:06:13.616017  709503 node_conditions.go:123] node cpu capacity is 8
	I1124 09:06:13.616037  709503 node_conditions.go:105] duration metric: took 3.491408ms to run NodePressure ...
	I1124 09:06:13.616060  709503 start.go:242] waiting for startup goroutines ...
	I1124 09:06:13.616072  709503 start.go:247] waiting for cluster config update ...
	I1124 09:06:13.616087  709503 start.go:256] writing updated cluster config ...
	I1124 09:06:13.616411  709503 ssh_runner.go:195] Run: rm -f paused
	I1124 09:06:13.622586  709503 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:06:13.628591  709503 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vxxnm" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:06:15.638301  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	I1124 09:06:12.955135  712609 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-841285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.269650036s)
	I1124 09:06:12.955177  712609 kic.go:203] duration metric: took 4.269827271s to extract preloaded images to volume ...
	W1124 09:06:12.955271  712609 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 09:06:12.955307  712609 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 09:06:12.955360  712609 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 09:06:13.076133  712609 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-841285 --name embed-certs-841285 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-841285 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-841285 --network embed-certs-841285 --ip 192.168.94.2 --volume embed-certs-841285:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 09:06:13.540475  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Running}}
	I1124 09:06:13.565052  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Status}}
	I1124 09:06:13.591297  712609 cli_runner.go:164] Run: docker exec embed-certs-841285 stat /var/lib/dpkg/alternatives/iptables
	I1124 09:06:13.656882  712609 oci.go:144] the created container "embed-certs-841285" has a running status.
	I1124 09:06:13.656945  712609 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa...
	I1124 09:06:13.819842  712609 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 09:06:13.853629  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Status}}
	I1124 09:06:13.880952  712609 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 09:06:13.880975  712609 kic_runner.go:114] Args: [docker exec --privileged embed-certs-841285 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 09:06:13.938355  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Status}}
	I1124 09:06:13.964024  712609 machine.go:94] provisionDockerMachine start ...
	I1124 09:06:13.964165  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:13.997714  712609 main.go:143] libmachine: Using SSH client type: native
	I1124 09:06:13.998308  712609 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1124 09:06:13.998364  712609 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:06:13.999301  712609 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54278->127.0.0.1:33083: read: connection reset by peer
	I1124 09:06:17.148399  712609 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-841285
	
	I1124 09:06:17.148432  712609 ubuntu.go:182] provisioning hostname "embed-certs-841285"
	I1124 09:06:17.148523  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:17.169142  712609 main.go:143] libmachine: Using SSH client type: native
	I1124 09:06:17.169368  712609 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1124 09:06:17.169382  712609 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-841285 && echo "embed-certs-841285" | sudo tee /etc/hostname
	I1124 09:06:17.328945  712609 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-841285
	
	I1124 09:06:17.329026  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:17.346388  712609 main.go:143] libmachine: Using SSH client type: native
	I1124 09:06:17.346664  712609 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1124 09:06:17.346683  712609 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-841285' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-841285/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-841285' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:06:15.511184  710410 addons.go:530] duration metric: took 3.629676818s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I1124 09:06:16.002642  710410 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 09:06:16.009012  710410 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 09:06:16.010266  710410 api_server.go:141] control plane version: v1.35.0-beta.0
	I1124 09:06:16.010304  710410 api_server.go:131] duration metric: took 507.960092ms to wait for apiserver health ...
	I1124 09:06:16.010318  710410 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:06:16.014692  710410 system_pods.go:59] 9 kube-system pods found
	I1124 09:06:16.014742  710410 system_pods.go:61] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:16.014756  710410 system_pods.go:61] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:06:16.014777  710410 system_pods.go:61] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:06:16.014826  710410 system_pods.go:61] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:06:16.014841  710410 system_pods.go:61] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:06:16.014851  710410 system_pods.go:61] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:06:16.014864  710410 system_pods.go:61] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:06:16.014872  710410 system_pods.go:61] "metrics-server-5d785b57d4-pd54z" [09e6bd80-a8d1-4b28-b18a-094e3667ef9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:06:16.014890  710410 system_pods.go:61] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:16.014898  710410 system_pods.go:74] duration metric: took 4.569905ms to wait for pod list to return data ...
	I1124 09:06:16.014907  710410 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:06:16.017234  710410 default_sa.go:45] found service account: "default"
	I1124 09:06:16.017256  710410 default_sa.go:55] duration metric: took 2.341243ms for default service account to be created ...
	I1124 09:06:16.017265  710410 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:06:16.020426  710410 system_pods.go:86] 9 kube-system pods found
	I1124 09:06:16.020482  710410 system_pods.go:89] "coredns-7d764666f9-b6dpn" [c84a0b09-07a2-4e6a-928a-b9aca9e3b1a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:16.020495  710410 system_pods.go:89] "etcd-no-preload-820576" [39f892d7-184f-4858-be8f-174718ac6aaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:06:16.020506  710410 system_pods.go:89] "kindnet-kvm52" [967c23e8-7e42-4034-b5a2-e4cd65bc4d94] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:06:16.020514  710410 system_pods.go:89] "kube-apiserver-no-preload-820576" [d5294a7a-2337-4ef4-82a2-20d85daf8739] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:06:16.020525  710410 system_pods.go:89] "kube-controller-manager-no-preload-820576" [e6320a0d-f5cf-4a17-af3d-6fa87f1e02ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:06:16.020536  710410 system_pods.go:89] "kube-proxy-vz24l" [4a64a474-1e1b-411d-aea6-9d12e1d9f84e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:06:16.020544  710410 system_pods.go:89] "kube-scheduler-no-preload-820576" [9fd536e3-1a01-4c16-bf46-75db8f38b3f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:06:16.020555  710410 system_pods.go:89] "metrics-server-5d785b57d4-pd54z" [09e6bd80-a8d1-4b28-b18a-094e3667ef9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 09:06:16.020569  710410 system_pods.go:89] "storage-provisioner" [144d237b-4f80-441d-867b-0ee26edd8590] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:16.020580  710410 system_pods.go:126] duration metric: took 3.30745ms to wait for k8s-apps to be running ...
	I1124 09:06:16.020593  710410 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:06:16.020644  710410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:06:16.037995  710410 system_svc.go:56] duration metric: took 17.390664ms WaitForService to wait for kubelet
	I1124 09:06:16.038027  710410 kubeadm.go:587] duration metric: took 4.156587016s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:06:16.038052  710410 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:06:16.040600  710410 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:06:16.040626  710410 node_conditions.go:123] node cpu capacity is 8
	I1124 09:06:16.040644  710410 node_conditions.go:105] duration metric: took 2.58546ms to run NodePressure ...
	I1124 09:06:16.040658  710410 start.go:242] waiting for startup goroutines ...
	I1124 09:06:16.040672  710410 start.go:247] waiting for cluster config update ...
	I1124 09:06:16.040687  710410 start.go:256] writing updated cluster config ...
	I1124 09:06:16.041014  710410 ssh_runner.go:195] Run: rm -f paused
	I1124 09:06:16.045332  710410 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:06:16.048757  710410 pod_ready.go:83] waiting for pod "coredns-7d764666f9-b6dpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:17.491372  712609 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:06:17.491411  712609 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:06:17.491444  712609 ubuntu.go:190] setting up certificates
	I1124 09:06:17.491502  712609 provision.go:84] configureAuth start
	I1124 09:06:17.491582  712609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-841285
	I1124 09:06:17.509416  712609 provision.go:143] copyHostCerts
	I1124 09:06:17.509497  712609 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:06:17.509513  712609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:06:17.509698  712609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:06:17.509870  712609 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:06:17.509885  712609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:06:17.509930  712609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:06:17.510041  712609 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:06:17.510054  712609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:06:17.510092  712609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:06:17.510183  712609 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.embed-certs-841285 san=[127.0.0.1 192.168.94.2 embed-certs-841285 localhost minikube]
	I1124 09:06:17.622425  712609 provision.go:177] copyRemoteCerts
	I1124 09:06:17.622510  712609 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:06:17.622560  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:17.640855  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:17.744127  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:06:17.764220  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:06:17.782902  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:06:17.801085  712609 provision.go:87] duration metric: took 309.559848ms to configureAuth
	I1124 09:06:17.801119  712609 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:06:17.801320  712609 config.go:182] Loaded profile config "embed-certs-841285": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 09:06:17.801334  712609 machine.go:97] duration metric: took 3.837283638s to provisionDockerMachine
	I1124 09:06:17.801342  712609 client.go:176] duration metric: took 10.043568101s to LocalClient.Create
	I1124 09:06:17.801360  712609 start.go:167] duration metric: took 10.04363162s to libmachine.API.Create "embed-certs-841285"
	I1124 09:06:17.801369  712609 start.go:293] postStartSetup for "embed-certs-841285" (driver="docker")
	I1124 09:06:17.801378  712609 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:06:17.801431  712609 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:06:17.801498  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:17.820054  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:17.929888  712609 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:06:17.934299  712609 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:06:17.934331  712609 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:06:17.934361  712609 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:06:17.934428  712609 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:06:17.934583  712609 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:06:17.934723  712609 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:06:17.944993  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:06:17.969913  712609 start.go:296] duration metric: took 168.526621ms for postStartSetup
	I1124 09:06:17.970380  712609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-841285
	I1124 09:06:17.996605  712609 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/config.json ...
	I1124 09:06:17.996936  712609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:06:17.996994  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:18.018740  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:18.128353  712609 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:06:18.133747  712609 start.go:128] duration metric: took 10.377814334s to createHost
	I1124 09:06:18.133774  712609 start.go:83] releasing machines lock for "embed-certs-841285", held for 10.377970244s
	I1124 09:06:18.133876  712609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-841285
	I1124 09:06:18.150815  712609 ssh_runner.go:195] Run: cat /version.json
	I1124 09:06:18.150874  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:18.150943  712609 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:06:18.151022  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:18.169533  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:18.169804  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:18.269428  712609 ssh_runner.go:195] Run: systemctl --version
	I1124 09:06:18.321761  712609 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:06:18.327046  712609 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:06:18.327133  712609 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:06:18.352096  712609 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 09:06:18.352118  712609 start.go:496] detecting cgroup driver to use...
	I1124 09:06:18.352148  712609 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:06:18.352186  712609 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:06:18.366957  712609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:06:18.381693  712609 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:06:18.381752  712609 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:06:18.398113  712609 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:06:18.415593  712609 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:06:18.502067  712609 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:06:18.601361  712609 docker.go:234] disabling docker service ...
	I1124 09:06:18.601437  712609 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:06:18.623658  712609 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:06:18.639727  712609 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:06:18.740531  712609 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:06:18.828884  712609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:06:18.842742  712609 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:06:18.857868  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:19.175440  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 09:06:19.187113  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:06:19.196765  712609 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:06:19.196825  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:06:19.208310  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:06:19.218395  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:06:19.228392  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:06:19.237420  712609 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:06:19.245996  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:06:19.255260  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:06:19.264330  712609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:06:19.273668  712609 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:06:19.281360  712609 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:06:19.289193  712609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:19.364645  712609 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 09:06:19.463547  712609 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:06:19.463645  712609 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:06:19.467939  712609 start.go:564] Will wait 60s for crictl version
	I1124 09:06:19.467997  712609 ssh_runner.go:195] Run: which crictl
	I1124 09:06:19.472220  712609 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:06:19.499311  712609 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:06:19.499385  712609 ssh_runner.go:195] Run: containerd --version
	I1124 09:06:19.521824  712609 ssh_runner.go:195] Run: containerd --version
	I1124 09:06:19.545239  712609 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.1.5 ...
	W1124 09:06:18.134936  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:20.633103  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	I1124 09:06:19.546299  712609 cli_runner.go:164] Run: docker network inspect embed-certs-841285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:06:19.564025  712609 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 09:06:19.568256  712609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:06:19.579411  712609 kubeadm.go:884] updating cluster {Name:embed-certs-841285 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-841285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:06:19.579631  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:19.895986  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:20.213647  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:20.537503  712609 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1124 09:06:20.537655  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:20.844686  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:21.154327  712609 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
	I1124 09:06:21.492353  712609 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:06:21.518072  712609 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:06:21.518095  712609 containerd.go:534] Images already preloaded, skipping extraction
	I1124 09:06:21.518159  712609 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:06:21.543595  712609 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:06:21.543618  712609 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:06:21.543626  712609 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 containerd true true} ...
	I1124 09:06:21.543712  712609 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-841285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-841285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:06:21.543772  712609 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:06:21.574910  712609 cni.go:84] Creating CNI manager for ""
	I1124 09:06:21.574936  712609 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:06:21.574957  712609 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 09:06:21.574989  712609 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-841285 NodeName:embed-certs-841285 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:06:21.575132  712609 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-841285"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:06:21.575206  712609 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1124 09:06:21.583842  712609 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:06:21.583925  712609 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:06:21.591929  712609 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 09:06:21.604987  712609 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 09:06:21.621814  712609 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1124 09:06:21.635273  712609 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:06:21.638971  712609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:06:21.649297  712609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:21.739776  712609 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:06:21.764758  712609 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285 for IP: 192.168.94.2
	I1124 09:06:21.764785  712609 certs.go:195] generating shared ca certs ...
	I1124 09:06:21.764810  712609 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.764986  712609 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:06:21.765033  712609 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:06:21.765044  712609 certs.go:257] generating profile certs ...
	I1124 09:06:21.765102  712609 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/client.key
	I1124 09:06:21.765114  712609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/client.crt with IP's: []
	I1124 09:06:21.864750  712609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/client.crt ...
	I1124 09:06:21.864775  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/client.crt: {Name:mkc060bfda49863ba613e074874e844ca9a9e70e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.864958  712609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/client.key ...
	I1124 09:06:21.864973  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/client.key: {Name:mkd5104c3dae3b5f7ae3fa31a87f62c7e96b054a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.865062  712609 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.key.97c836bb
	I1124 09:06:21.865080  712609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.crt.97c836bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 09:06:21.904289  712609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.crt.97c836bb ...
	I1124 09:06:21.904314  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.crt.97c836bb: {Name:mkda4f19a07c086a3f5c62a810713f45695762dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.904472  712609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.key.97c836bb ...
	I1124 09:06:21.904486  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.key.97c836bb: {Name:mk8047fab627a190f575ab4aeb5179696588ecee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.904563  712609 certs.go:382] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.crt.97c836bb -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.crt
	I1124 09:06:21.904638  712609 certs.go:386] copying /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.key.97c836bb -> /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.key
	I1124 09:06:21.904692  712609 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.key
	I1124 09:06:21.904707  712609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.crt with IP's: []
	I1124 09:06:21.962903  712609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.crt ...
	I1124 09:06:21.962931  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.crt: {Name:mk2ac14b7d31660738cdb7ddd69ce29a7ebf81c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.963075  712609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.key ...
	I1124 09:06:21.963090  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.key: {Name:mk861035d219c3f6a3f9576912efeef0ad1f2764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:21.963267  712609 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:06:21.963310  712609 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:06:21.963320  712609 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:06:21.963351  712609 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:06:21.963376  712609 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:06:21.963398  712609 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:06:21.963445  712609 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:06:21.964070  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:06:21.985738  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:06:22.006551  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:06:22.027007  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:06:22.047398  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 09:06:22.069149  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:06:22.088426  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:06:22.108672  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/embed-certs-841285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:06:22.129917  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:06:22.154617  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:06:22.175965  712609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:06:22.197185  712609 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:06:22.212418  712609 ssh_runner.go:195] Run: openssl version
	I1124 09:06:22.220166  712609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:06:22.229632  712609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:22.234267  712609 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:22.234327  712609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:06:22.279000  712609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:06:22.289299  712609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:06:22.299120  712609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:06:22.303121  712609 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:06:22.303174  712609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:06:22.342953  712609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:06:22.353364  712609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:06:22.363375  712609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:06:22.367741  712609 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:06:22.367795  712609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:06:22.417612  712609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:06:22.428519  712609 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:06:22.432272  712609 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 09:06:22.432340  712609 kubeadm.go:401] StartCluster: {Name:embed-certs-841285 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-841285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:06:22.432434  712609 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:06:22.432540  712609 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:06:22.465522  712609 cri.go:89] found id: ""
	I1124 09:06:22.465607  712609 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:06:22.474541  712609 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 09:06:22.483474  712609 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 09:06:22.483532  712609 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 09:06:22.492207  712609 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 09:06:22.492228  712609 kubeadm.go:158] found existing configuration files:
	
	I1124 09:06:22.492272  712609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 09:06:22.500211  712609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 09:06:22.500267  712609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 09:06:22.508026  712609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 09:06:22.516932  712609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 09:06:22.516975  712609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 09:06:22.525873  712609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 09:06:22.534520  712609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 09:06:22.534574  712609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 09:06:22.543311  712609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 09:06:22.552688  712609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 09:06:22.552736  712609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 09:06:22.561991  712609 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 09:06:22.608133  712609 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1124 09:06:22.608234  712609 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 09:06:22.630269  712609 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 09:06:22.630387  712609 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 09:06:22.630455  712609 kubeadm.go:319] OS: Linux
	I1124 09:06:22.630534  712609 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 09:06:22.630621  712609 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 09:06:22.630695  712609 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 09:06:22.630774  712609 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 09:06:22.630857  712609 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 09:06:22.630942  712609 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 09:06:22.631008  712609 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 09:06:22.631088  712609 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 09:06:22.699764  712609 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 09:06:22.699918  712609 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 09:06:22.700047  712609 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 09:06:22.705501  712609 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1124 09:06:18.067100  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:20.554983  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:21.157595  685562 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.08670591s)
	W1124 09:06:21.157642  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 09:06:21.157655  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:21.157675  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:21.191156  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:21.191193  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:21.226292  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:21.226323  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:21.260806  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:21.260836  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:21.304040  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:21.304069  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:21.318332  685562 logs.go:123] Gathering logs for kube-apiserver [1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365] ...
	I1124 09:06:21.318357  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:06:21.352772  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:21.352805  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:21.384887  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:21.384916  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:21.413079  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:21.413105  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:21.439058  685562 logs.go:123] Gathering logs for kube-controller-manager [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d] ...
	I1124 09:06:21.439086  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:06:23.966537  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1124 09:06:22.635345  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:25.134573  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	I1124 09:06:22.709334  712609 out.go:252]   - Generating certificates and keys ...
	I1124 09:06:22.709444  712609 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 09:06:22.709600  712609 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 09:06:23.287709  712609 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 09:06:23.440107  712609 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 09:06:23.712858  712609 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 09:06:23.920983  712609 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 09:06:24.576354  712609 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 09:06:24.576583  712609 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-841285 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 09:06:25.340646  712609 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 09:06:25.340931  712609 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-841285 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 09:06:25.560248  712609 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 09:06:25.902615  712609 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 09:06:26.142353  712609 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 09:06:26.142521  712609 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 09:06:26.237440  712609 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 09:06:26.780742  712609 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 09:06:26.979631  712609 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 09:06:27.137635  712609 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 09:06:27.529861  712609 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 09:06:27.530452  712609 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 09:06:27.535586  712609 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1124 09:06:23.055074  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:25.555355  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:25.205914  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:50432->192.168.76.2:8443: read: connection reset by peer
	I1124 09:06:25.205996  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:25.206062  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:25.239861  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:25.239889  685562 cri.go:89] found id: "1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:06:25.239895  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:25.239901  685562 cri.go:89] found id: ""
	I1124 09:06:25.239912  685562 logs.go:282] 3 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:25.239978  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.244271  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.248558  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.252330  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:25.252389  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:25.280363  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:25.280387  685562 cri.go:89] found id: ""
	I1124 09:06:25.280399  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:25.280496  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.284837  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:25.284895  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:25.311596  685562 cri.go:89] found id: ""
	I1124 09:06:25.311624  685562 logs.go:282] 0 containers: []
	W1124 09:06:25.311635  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:25.311644  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:25.311701  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:25.339841  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:25.339864  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:25.339868  685562 cri.go:89] found id: ""
	I1124 09:06:25.339876  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:25.339949  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.344303  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.348701  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:25.348761  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:25.376996  685562 cri.go:89] found id: ""
	I1124 09:06:25.377021  685562 logs.go:282] 0 containers: []
	W1124 09:06:25.377031  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:25.377040  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:25.377099  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:25.403929  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:25.403953  685562 cri.go:89] found id: "4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:06:25.403959  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:25.403964  685562 cri.go:89] found id: ""
	I1124 09:06:25.403973  685562 logs.go:282] 3 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:25.404026  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.408011  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.412018  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:25.415684  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:25.415744  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:25.443570  685562 cri.go:89] found id: ""
	I1124 09:06:25.443597  685562 logs.go:282] 0 containers: []
	W1124 09:06:25.443609  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:25.443617  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:25.443677  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:25.471902  685562 cri.go:89] found id: ""
	I1124 09:06:25.471937  685562 logs.go:282] 0 containers: []
	W1124 09:06:25.471948  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:25.471962  685562 logs.go:123] Gathering logs for kube-apiserver [1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365] ...
	I1124 09:06:25.471979  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1c3ac7689834f46a67038f7d9d8749dd11964dbb2214dc5f58152210452bc365"
	I1124 09:06:25.506524  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:25.506556  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:25.545245  685562 logs.go:123] Gathering logs for kube-controller-manager [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d] ...
	I1124 09:06:25.545276  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:06:25.578503  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:25.578540  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:25.616739  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:25.616770  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:25.661551  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:25.661582  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:25.694323  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:25.694356  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:25.709071  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:25.709097  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:25.770429  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:25.770452  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:25.770502  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:25.809925  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:25.809960  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:25.844164  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:25.844194  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:25.872097  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:25.872128  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:25.900658  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:25.900686  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:25.981821  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:25.981857  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:28.514526  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:28.515025  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:28.515093  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:28.515149  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:28.548258  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:28.548286  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:28.548293  685562 cri.go:89] found id: ""
	I1124 09:06:28.548303  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:28.548371  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.553603  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.558175  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:28.558298  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:28.596802  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:28.596826  685562 cri.go:89] found id: ""
	I1124 09:06:28.596838  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:28.596894  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.602045  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:28.602127  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:28.636975  685562 cri.go:89] found id: ""
	I1124 09:06:28.637002  685562 logs.go:282] 0 containers: []
	W1124 09:06:28.637018  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:28.637026  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:28.637089  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:28.672539  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:28.672577  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:28.672584  685562 cri.go:89] found id: ""
	I1124 09:06:28.672594  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:28.672658  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.677886  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.682559  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:28.682629  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:28.714211  685562 cri.go:89] found id: ""
	I1124 09:06:28.714242  685562 logs.go:282] 0 containers: []
	W1124 09:06:28.714253  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:28.714262  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:28.714327  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:28.749220  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:28.749254  685562 cri.go:89] found id: "4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:06:28.749260  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:28.749264  685562 cri.go:89] found id: ""
	I1124 09:06:28.749274  685562 logs.go:282] 3 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:28.749337  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.754530  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.758971  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:28.763632  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:28.763702  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:28.800732  685562 cri.go:89] found id: ""
	I1124 09:06:28.800760  685562 logs.go:282] 0 containers: []
	W1124 09:06:28.800771  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:28.800780  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:28.800852  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:28.836364  685562 cri.go:89] found id: ""
	I1124 09:06:28.836401  685562 logs.go:282] 0 containers: []
	W1124 09:06:28.836412  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:28.836425  685562 logs.go:123] Gathering logs for kube-controller-manager [4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d] ...
	I1124 09:06:28.836508  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4fe764a0d4480b2b9c1a7e51dc63c845a71b6a2a78a4861dbbf794ad3bd3079d"
	I1124 09:06:28.865658  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:28.865685  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:28.902970  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:28.903005  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:28.948455  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:28.948504  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:28.983980  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:28.984010  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:29.070849  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:29.070890  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:29.088719  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:29.088760  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:29.152338  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:29.152362  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:29.152385  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:29.189194  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:29.189234  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:29.228399  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:29.228437  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:29.270425  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:29.270488  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:29.310086  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:29.310117  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:29.349346  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:29.349377  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	W1124 09:06:27.135771  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:29.634500  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	I1124 09:06:27.536998  712609 out.go:252]   - Booting up control plane ...
	I1124 09:06:27.537131  712609 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 09:06:27.537241  712609 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 09:06:27.537890  712609 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 09:06:27.557360  712609 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 09:06:27.557556  712609 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 09:06:27.566014  712609 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 09:06:27.566352  712609 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 09:06:27.566429  712609 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 09:06:27.689337  712609 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 09:06:27.689539  712609 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 09:06:29.690081  712609 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000905789s
	I1124 09:06:29.695079  712609 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 09:06:29.695207  712609 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 09:06:29.695315  712609 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 09:06:29.695440  712609 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 09:06:30.732893  712609 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.037758856s
	I1124 09:06:31.697336  712609 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.002233718s
	W1124 09:06:28.055145  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:30.055642  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:32.554238  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:33.196787  712609 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501610996s
	I1124 09:06:33.211759  712609 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 09:06:33.220742  712609 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 09:06:33.228614  712609 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 09:06:33.228906  712609 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-841285 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 09:06:33.236403  712609 kubeadm.go:319] [bootstrap-token] Using token: d17y4k.5oks848f61dz75lb
	I1124 09:06:33.238015  712609 out.go:252]   - Configuring RBAC rules ...
	I1124 09:06:33.238150  712609 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 09:06:33.240584  712609 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 09:06:33.245621  712609 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 09:06:33.247952  712609 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 09:06:33.251093  712609 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 09:06:33.253507  712609 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 09:06:33.601539  712609 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 09:06:34.016941  712609 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 09:06:34.602603  712609 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 09:06:34.603507  712609 kubeadm.go:319] 
	I1124 09:06:34.603600  712609 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 09:06:34.603615  712609 kubeadm.go:319] 
	I1124 09:06:34.603724  712609 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 09:06:34.603743  712609 kubeadm.go:319] 
	I1124 09:06:34.603765  712609 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 09:06:34.603864  712609 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 09:06:34.603941  712609 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 09:06:34.603950  712609 kubeadm.go:319] 
	I1124 09:06:34.604020  712609 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 09:06:34.604028  712609 kubeadm.go:319] 
	I1124 09:06:34.604085  712609 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 09:06:34.604093  712609 kubeadm.go:319] 
	I1124 09:06:34.604169  712609 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 09:06:34.604279  712609 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 09:06:34.604381  712609 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 09:06:34.604388  712609 kubeadm.go:319] 
	I1124 09:06:34.604520  712609 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 09:06:34.604605  712609 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 09:06:34.604620  712609 kubeadm.go:319] 
	I1124 09:06:34.604694  712609 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token d17y4k.5oks848f61dz75lb \
	I1124 09:06:34.604791  712609 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be \
	I1124 09:06:34.604825  712609 kubeadm.go:319] 	--control-plane 
	I1124 09:06:34.604832  712609 kubeadm.go:319] 
	I1124 09:06:34.604926  712609 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 09:06:34.604934  712609 kubeadm.go:319] 
	I1124 09:06:34.605025  712609 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token d17y4k.5oks848f61dz75lb \
	I1124 09:06:34.605148  712609 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:058f105135414f3c09406a88ceaaa8a4946b8fa5ee02b1189df823d65cc738be 
	I1124 09:06:34.607652  712609 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 09:06:34.607774  712609 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 09:06:34.607803  712609 cni.go:84] Creating CNI manager for ""
	I1124 09:06:34.607817  712609 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:06:34.609642  712609 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 09:06:31.881862  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:31.882338  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:31.882394  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:31.882445  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:31.909213  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:31.909236  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:31.909240  685562 cri.go:89] found id: ""
	I1124 09:06:31.909247  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:31.909291  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:31.913329  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:31.917041  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:31.917093  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:31.943024  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:31.943044  685562 cri.go:89] found id: ""
	I1124 09:06:31.943051  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:31.943103  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:31.947092  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:31.947162  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:31.973577  685562 cri.go:89] found id: ""
	I1124 09:06:31.973599  685562 logs.go:282] 0 containers: []
	W1124 09:06:31.973607  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:31.973613  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:31.973658  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:31.999230  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:31.999254  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:31.999258  685562 cri.go:89] found id: ""
	I1124 09:06:31.999266  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:31.999311  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:32.003300  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:32.006900  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:32.006964  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:32.031766  685562 cri.go:89] found id: ""
	I1124 09:06:32.031793  685562 logs.go:282] 0 containers: []
	W1124 09:06:32.031803  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:32.031810  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:32.031873  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:32.059502  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:32.059525  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:32.059530  685562 cri.go:89] found id: ""
	I1124 09:06:32.059537  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:32.059582  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:32.063421  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:32.067085  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:32.067142  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:32.092390  685562 cri.go:89] found id: ""
	I1124 09:06:32.092412  685562 logs.go:282] 0 containers: []
	W1124 09:06:32.092419  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:32.092428  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:32.092509  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:32.117763  685562 cri.go:89] found id: ""
	I1124 09:06:32.117789  685562 logs.go:282] 0 containers: []
	W1124 09:06:32.117797  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:32.117807  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:32.117818  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:32.150083  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:32.150110  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:32.183530  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:32.183564  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:32.217026  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:32.217054  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:32.296676  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:32.296708  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:32.323952  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:32.323979  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:32.349365  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:32.349389  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:32.393026  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:32.393053  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:32.422866  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:32.422894  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:32.436533  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:32.436560  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:32.491046  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:32.491072  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:32.491085  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:32.521289  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:32.521315  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	W1124 09:06:31.634821  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:33.635206  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	I1124 09:06:34.610765  712609 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 09:06:34.615266  712609 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1124 09:06:34.615285  712609 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 09:06:34.628934  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 09:06:34.828829  712609 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 09:06:34.828867  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:34.828926  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-841285 minikube.k8s.io/updated_at=2025_11_24T09_06_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811 minikube.k8s.io/name=embed-certs-841285 minikube.k8s.io/primary=true
	I1124 09:06:34.840509  712609 ops.go:34] apiserver oom_adj: -16
	I1124 09:06:34.904266  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:35.404241  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:35.905248  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:36.405025  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:36.904407  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:37.404570  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
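
	(Editor's illustrative aside, not part of the captured log: the half-second cadence of "kubectl get sa default" runs above is a wait for the default ServiceAccount to appear in the new cluster, which the log later summarises as the elevateKubeSystemPrivileges wait. A rough sketch of that kind of retry loop, assuming the kubectl binary path and kubeconfig shown in the log; the loop, interval, and timeout are assumptions, not minikube's actual implementation:)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries `kubectl get sa default` until the ServiceAccount
	// exists or the deadline passes. kubectl exits non-zero while it is missing.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		// Paths taken from the log above; the 2-minute timeout is illustrative.
		err := waitForDefaultSA(
			"/var/lib/minikube/binaries/v1.34.2/kubectl",
			"/var/lib/minikube/kubeconfig",
			2*time.Minute,
		)
		fmt.Println(err)
	}
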
	W1124 09:06:35.054174  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:37.054257  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:35.054831  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:35.055205  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:35.055268  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:35.055326  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:35.083391  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:35.083409  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:35.083413  685562 cri.go:89] found id: ""
	I1124 09:06:35.083421  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:35.083510  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.087566  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.091809  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:35.091863  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:35.118108  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:35.118127  685562 cri.go:89] found id: ""
	I1124 09:06:35.118136  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:35.118198  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.122294  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:35.122370  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:35.148804  685562 cri.go:89] found id: ""
	I1124 09:06:35.148824  685562 logs.go:282] 0 containers: []
	W1124 09:06:35.148832  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:35.148837  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:35.148882  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:35.175511  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:35.175534  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:35.175539  685562 cri.go:89] found id: ""
	I1124 09:06:35.175549  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:35.175604  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.179432  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.182990  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:35.183047  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:35.208209  685562 cri.go:89] found id: ""
	I1124 09:06:35.208229  685562 logs.go:282] 0 containers: []
	W1124 09:06:35.208242  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:35.208248  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:35.208294  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:35.234429  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:35.234455  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:35.234506  685562 cri.go:89] found id: ""
	I1124 09:06:35.234515  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:35.234561  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.238390  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:35.241907  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:35.241961  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:35.269120  685562 cri.go:89] found id: ""
	I1124 09:06:35.269139  685562 logs.go:282] 0 containers: []
	W1124 09:06:35.269151  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:35.269158  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:35.269205  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:35.294592  685562 cri.go:89] found id: ""
	I1124 09:06:35.294615  685562 logs.go:282] 0 containers: []
	W1124 09:06:35.294624  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:35.294637  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:35.294650  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:35.338717  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:35.338746  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:35.369496  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:35.369531  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:35.400289  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:35.400316  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:35.436787  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:35.436819  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:35.473996  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:35.474023  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:35.500945  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:35.500968  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:35.536390  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:35.536420  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:35.620833  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:35.620877  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:35.637934  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:35.637967  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:35.698091  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:35.698115  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:35.698133  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:35.727855  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:35.727886  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:38.263143  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:38.263700  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:38.263765  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:38.263829  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:38.292856  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:38.292878  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:38.292883  685562 cri.go:89] found id: ""
	I1124 09:06:38.292891  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:38.292948  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.297143  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.301133  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:38.301199  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:38.328125  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:38.328156  685562 cri.go:89] found id: ""
	I1124 09:06:38.328169  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:38.328229  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.332380  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:38.332445  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:38.358808  685562 cri.go:89] found id: ""
	I1124 09:06:38.358835  685562 logs.go:282] 0 containers: []
	W1124 09:06:38.358846  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:38.358854  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:38.358919  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:38.385012  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:38.385037  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:38.385042  685562 cri.go:89] found id: ""
	I1124 09:06:38.385050  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:38.385112  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.389205  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.392855  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:38.392906  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:38.419726  685562 cri.go:89] found id: ""
	I1124 09:06:38.419758  685562 logs.go:282] 0 containers: []
	W1124 09:06:38.419770  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:38.419778  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:38.419836  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:38.449557  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:38.449576  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:38.449579  685562 cri.go:89] found id: ""
	I1124 09:06:38.449588  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:38.449635  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.454052  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:38.458515  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:38.458573  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:38.487500  685562 cri.go:89] found id: ""
	I1124 09:06:38.487529  685562 logs.go:282] 0 containers: []
	W1124 09:06:38.487540  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:38.487549  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:38.487614  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:38.514178  685562 cri.go:89] found id: ""
	I1124 09:06:38.514204  685562 logs.go:282] 0 containers: []
	W1124 09:06:38.514212  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:38.514223  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:38.514233  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:38.574230  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:38.574271  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:38.574290  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:38.618314  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:38.618352  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:38.649077  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:38.649113  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:38.687707  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:38.687738  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:38.731520  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:38.731563  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:38.816355  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:38.816394  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:38.848420  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:38.848447  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:38.883348  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:38.883378  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:38.918351  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:38.918392  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:38.948723  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:38.948764  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:38.985359  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:38.985389  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
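
	(Editor's illustrative aside, not part of the captured log: the repeated "Checking apiserver healthz ... connection refused" entries above correspond to an HTTPS GET against the apiserver's /healthz endpoint, retried until it answers or the wait times out. A minimal sketch of such a probe, assuming the address from the log; the retry interval, timeout, and code structure are assumptions, not minikube's actual implementation:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// probeHealthz polls an apiserver /healthz endpoint until it returns 200 OK
	// or the deadline expires. TLS verification is skipped because the probing
	// host does not trust the apiserver's certificate.
	func probeHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Address taken from the log above; interval and timeout are illustrative.
		if err := probeHealthz("https://192.168.76.2:8443/healthz", 3*time.Second, 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
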
	I1124 09:06:37.905005  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:38.405201  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:38.904881  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:39.404418  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:39.905009  712609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 09:06:39.971615  712609 kubeadm.go:1114] duration metric: took 5.142792682s to wait for elevateKubeSystemPrivileges
	I1124 09:06:39.971652  712609 kubeadm.go:403] duration metric: took 17.539316867s to StartCluster
	I1124 09:06:39.971677  712609 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:39.971761  712609 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:06:39.974117  712609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:06:39.974376  712609 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 09:06:39.974397  712609 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:06:39.974479  712609 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:06:39.974582  712609 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-841285"
	I1124 09:06:39.974603  712609 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-841285"
	I1124 09:06:39.974635  712609 host.go:66] Checking if "embed-certs-841285" exists ...
	I1124 09:06:39.974658  712609 config.go:182] Loaded profile config "embed-certs-841285": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 09:06:39.974783  712609 addons.go:70] Setting default-storageclass=true in profile "embed-certs-841285"
	I1124 09:06:39.974821  712609 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-841285"
	I1124 09:06:39.975105  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Status}}
	I1124 09:06:39.975155  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Status}}
	I1124 09:06:39.980273  712609 out.go:179] * Verifying Kubernetes components...
	I1124 09:06:39.981373  712609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:06:40.002669  712609 addons.go:239] Setting addon default-storageclass=true in "embed-certs-841285"
	I1124 09:06:40.002703  712609 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 09:06:40.002722  712609 host.go:66] Checking if "embed-certs-841285" exists ...
	I1124 09:06:40.003218  712609 cli_runner.go:164] Run: docker container inspect embed-certs-841285 --format={{.State.Status}}
	I1124 09:06:40.004007  712609 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:40.004029  712609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:06:40.004085  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:40.031263  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:40.033666  712609 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:40.033688  712609 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:06:40.033756  712609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-841285
	I1124 09:06:40.055874  712609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/embed-certs-841285/id_rsa Username:docker}
	I1124 09:06:40.076508  712609 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 09:06:40.128368  712609 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:06:40.151264  712609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:06:40.174106  712609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:06:40.246855  712609 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 09:06:40.249725  712609 node_ready.go:35] waiting up to 6m0s for node "embed-certs-841285" to be "Ready" ...
	I1124 09:06:40.462156  712609 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1124 09:06:36.134701  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:38.634083  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	I1124 09:06:40.463087  712609 addons.go:530] duration metric: took 488.631539ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 09:06:40.752073  712609 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-841285" context rescaled to 1 replicas
	W1124 09:06:42.252637  712609 node_ready.go:57] node "embed-certs-841285" has "Ready":"False" status (will retry)
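
	(Editor's illustrative aside, not part of the captured log: the "waiting up to 6m0s for node ... to be Ready" and "has Ready:False (will retry)" lines above poll the node's Ready condition. A hedged client-go sketch of that check, using the kubeconfig path and node name from the log; the poll loop itself is an assumption, not minikube's code:)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has condition Ready=True.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21978-435860/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		for {
			ready, err := nodeReady(ctx, cs, "embed-certs-841285")
			if err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(3 * time.Second) // retry interval is illustrative
		}
	}
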
	W1124 09:06:39.054512  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:41.554718  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:41.500869  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:41.501361  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:41.501432  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:41.501525  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:41.529135  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:41.529157  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:41.529162  685562 cri.go:89] found id: ""
	I1124 09:06:41.529170  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:41.529217  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.533428  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.537312  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:41.537378  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:41.565599  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:41.565621  685562 cri.go:89] found id: ""
	I1124 09:06:41.565631  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:41.565677  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.569790  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:41.569850  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:41.596873  685562 cri.go:89] found id: ""
	I1124 09:06:41.596902  685562 logs.go:282] 0 containers: []
	W1124 09:06:41.596910  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:41.596918  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:41.596982  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:41.623993  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:41.624016  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:41.624023  685562 cri.go:89] found id: ""
	I1124 09:06:41.624034  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:41.624092  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.628556  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.633200  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:41.633273  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:41.662861  685562 cri.go:89] found id: ""
	I1124 09:06:41.662887  685562 logs.go:282] 0 containers: []
	W1124 09:06:41.662898  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:41.662906  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:41.662971  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:41.690938  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:41.690959  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:41.690964  685562 cri.go:89] found id: ""
	I1124 09:06:41.690972  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:41.691024  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.695206  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:41.699275  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:41.699354  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:41.726057  685562 cri.go:89] found id: ""
	I1124 09:06:41.726084  685562 logs.go:282] 0 containers: []
	W1124 09:06:41.726093  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:41.726102  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:41.726160  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:41.753859  685562 cri.go:89] found id: ""
	I1124 09:06:41.753884  685562 logs.go:282] 0 containers: []
	W1124 09:06:41.753895  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:41.753908  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:41.753923  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:41.813479  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:41.813506  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:41.813530  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:41.848937  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:41.848968  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:41.878521  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:41.878548  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:41.913216  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:41.913249  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:41.940651  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:41.940681  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:41.985818  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:41.985863  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:42.070550  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:42.070588  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:42.103179  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:42.103207  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:42.135695  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:42.135723  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:42.167693  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:42.167721  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:42.199176  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:42.199214  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:44.714754  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:44.715204  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:44.715275  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:44.715339  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:44.742930  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:44.742954  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:44.742960  685562 cri.go:89] found id: ""
	I1124 09:06:44.742970  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:44.743020  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:44.747098  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:44.750940  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:44.751001  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:44.777988  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:44.778009  685562 cri.go:89] found id: ""
	I1124 09:06:44.778018  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:44.778072  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:44.781793  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:44.781851  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:44.807424  685562 cri.go:89] found id: ""
	I1124 09:06:44.807454  685562 logs.go:282] 0 containers: []
	W1124 09:06:44.807478  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:44.807496  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:44.807554  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:44.833894  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:44.833917  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:44.833923  685562 cri.go:89] found id: ""
	I1124 09:06:44.833932  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:44.833991  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:44.837845  685562 ssh_runner.go:195] Run: which crictl
	W1124 09:06:41.134407  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:43.633885  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:44.253048  712609 node_ready.go:57] node "embed-certs-841285" has "Ready":"False" status (will retry)
	W1124 09:06:46.753243  712609 node_ready.go:57] node "embed-certs-841285" has "Ready":"False" status (will retry)
	W1124 09:06:43.554785  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	W1124 09:06:46.054013  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:44.841712  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:44.841768  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:44.867127  685562 cri.go:89] found id: ""
	I1124 09:06:44.867152  685562 logs.go:282] 0 containers: []
	W1124 09:06:44.867163  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:44.867171  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:44.867226  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:44.893139  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:44.893161  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:44.893165  685562 cri.go:89] found id: ""
	I1124 09:06:44.893173  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:44.893225  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:44.897049  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:44.900623  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:44.900689  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:44.928422  685562 cri.go:89] found id: ""
	I1124 09:06:44.928453  685562 logs.go:282] 0 containers: []
	W1124 09:06:44.928478  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:44.928493  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:44.928555  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:44.955528  685562 cri.go:89] found id: ""
	I1124 09:06:44.955553  685562 logs.go:282] 0 containers: []
	W1124 09:06:44.955562  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:44.955572  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:44.955585  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:44.969974  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:44.970010  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:45.027796  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:45.027825  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:45.027844  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:45.059560  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:45.059589  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:45.091480  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:45.091510  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:45.119118  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:45.119148  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:45.151248  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:45.151276  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:45.182411  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:45.182439  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:45.226121  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:45.226153  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:45.310078  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:45.310107  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:45.342167  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:45.342197  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:45.369846  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:45.369882  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:47.899244  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:47.899692  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:47.899758  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:47.899824  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:47.929105  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:47.929131  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:47.929138  685562 cri.go:89] found id: ""
	I1124 09:06:47.929148  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:47.929208  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:47.933441  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:47.937325  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:47.937388  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:47.963580  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:47.963607  685562 cri.go:89] found id: ""
	I1124 09:06:47.963617  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:47.963690  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:47.968101  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:47.968172  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:47.996024  685562 cri.go:89] found id: ""
	I1124 09:06:47.996048  685562 logs.go:282] 0 containers: []
	W1124 09:06:47.996056  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:47.996065  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:47.996125  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:48.023413  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:48.023433  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:48.023436  685562 cri.go:89] found id: ""
	I1124 09:06:48.023445  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:48.023525  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:48.027692  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:48.031318  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:48.031395  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:48.059181  685562 cri.go:89] found id: ""
	I1124 09:06:48.059208  685562 logs.go:282] 0 containers: []
	W1124 09:06:48.059219  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:48.059227  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:48.059296  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:48.086294  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:48.086321  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:48.086327  685562 cri.go:89] found id: ""
	I1124 09:06:48.086335  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:48.086400  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:48.090814  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:48.095211  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:48.095280  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:48.122901  685562 cri.go:89] found id: ""
	I1124 09:06:48.122927  685562 logs.go:282] 0 containers: []
	W1124 09:06:48.122939  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:48.122949  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:48.123005  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:48.151342  685562 cri.go:89] found id: ""
	I1124 09:06:48.151383  685562 logs.go:282] 0 containers: []
	W1124 09:06:48.151393  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:48.151404  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:48.151418  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:48.193607  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:48.193643  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:48.226364  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:48.226398  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:48.283581  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:48.283600  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:48.283613  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:48.316978  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:48.317022  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:48.350934  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:48.350963  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:48.385233  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:48.385264  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:48.413799  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:48.413827  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:48.446876  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:48.446904  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:48.526939  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:48.526971  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:48.541619  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:48.541656  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:48.573404  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:48.573436  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	W1124 09:06:48.054454  710410 pod_ready.go:104] pod "coredns-7d764666f9-b6dpn" is not "Ready", error: <nil>
	I1124 09:06:49.554189  710410 pod_ready.go:94] pod "coredns-7d764666f9-b6dpn" is "Ready"
	I1124 09:06:49.554221  710410 pod_ready.go:86] duration metric: took 33.505424734s for pod "coredns-7d764666f9-b6dpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.556706  710410 pod_ready.go:83] waiting for pod "etcd-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.560364  710410 pod_ready.go:94] pod "etcd-no-preload-820576" is "Ready"
	I1124 09:06:49.560384  710410 pod_ready.go:86] duration metric: took 3.657273ms for pod "etcd-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.562524  710410 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.566017  710410 pod_ready.go:94] pod "kube-apiserver-no-preload-820576" is "Ready"
	I1124 09:06:49.566036  710410 pod_ready.go:86] duration metric: took 3.49074ms for pod "kube-apiserver-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.567748  710410 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.752582  710410 pod_ready.go:94] pod "kube-controller-manager-no-preload-820576" is "Ready"
	I1124 09:06:49.752618  710410 pod_ready.go:86] duration metric: took 184.846641ms for pod "kube-controller-manager-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:49.952635  710410 pod_ready.go:83] waiting for pod "kube-proxy-vz24l" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:50.353864  710410 pod_ready.go:94] pod "kube-proxy-vz24l" is "Ready"
	I1124 09:06:50.353965  710410 pod_ready.go:86] duration metric: took 401.30197ms for pod "kube-proxy-vz24l" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:50.551947  710410 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:50.953035  710410 pod_ready.go:94] pod "kube-scheduler-no-preload-820576" is "Ready"
	I1124 09:06:50.953063  710410 pod_ready.go:86] duration metric: took 401.089529ms for pod "kube-scheduler-no-preload-820576" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:50.953079  710410 pod_ready.go:40] duration metric: took 34.907713729s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:06:51.000066  710410 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1124 09:06:51.001724  710410 out.go:179] * Done! kubectl is now configured to use "no-preload-820576" cluster and "default" namespace by default
	W1124 09:06:46.136663  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:48.634477  709503 pod_ready.go:104] pod "coredns-5dd5756b68-vxxnm" is not "Ready", error: <nil>
	W1124 09:06:49.253434  712609 node_ready.go:57] node "embed-certs-841285" has "Ready":"False" status (will retry)
	I1124 09:06:51.253119  712609 node_ready.go:49] node "embed-certs-841285" is "Ready"
	I1124 09:06:51.253147  712609 node_ready.go:38] duration metric: took 11.003373653s for node "embed-certs-841285" to be "Ready" ...
	I1124 09:06:51.253162  712609 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:06:51.253205  712609 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:06:51.267104  712609 api_server.go:72] duration metric: took 11.292674054s to wait for apiserver process to appear ...
	I1124 09:06:51.267131  712609 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:06:51.267149  712609 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 09:06:51.271589  712609 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 09:06:51.272757  712609 api_server.go:141] control plane version: v1.34.2
	I1124 09:06:51.272785  712609 api_server.go:131] duration metric: took 5.647123ms to wait for apiserver health ...
	I1124 09:06:51.272795  712609 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:06:51.276409  712609 system_pods.go:59] 8 kube-system pods found
	I1124 09:06:51.276447  712609 system_pods.go:61] "coredns-66bc5c9577-pj9dj" [aeb3ca53-e377-4bb6-ac0b-0d30d279be3f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:51.276470  712609 system_pods.go:61] "etcd-embed-certs-841285" [5f3336ea-e36d-4b8f-a6de-c1e595b2961e] Running
	I1124 09:06:51.276479  712609 system_pods.go:61] "kindnet-vx768" [1815dcaa-34e5-492f-9cc5-89725e8bdd87] Running
	I1124 09:06:51.276491  712609 system_pods.go:61] "kube-apiserver-embed-certs-841285" [b0ac5705-f9a9-4fea-8af8-c5d77c7f74ed] Running
	I1124 09:06:51.276501  712609 system_pods.go:61] "kube-controller-manager-embed-certs-841285" [fc1170ed-2663-4ce9-8828-d57be6b82592] Running
	I1124 09:06:51.276506  712609 system_pods.go:61] "kube-proxy-fnp4m" [27a9ad80-225d-4155-82db-5c9e2b99d56c] Running
	I1124 09:06:51.276519  712609 system_pods.go:61] "kube-scheduler-embed-certs-841285" [92d4a46c-4456-426c-a51f-59702108ba5f] Running
	I1124 09:06:51.276557  712609 system_pods.go:61] "storage-provisioner" [a842c350-8d9a-4e1c-a3d6-286e8dd975f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:51.276569  712609 system_pods.go:74] duration metric: took 3.768489ms to wait for pod list to return data ...
	I1124 09:06:51.276577  712609 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:06:51.279038  712609 default_sa.go:45] found service account: "default"
	I1124 09:06:51.279060  712609 default_sa.go:55] duration metric: took 2.474985ms for default service account to be created ...
	I1124 09:06:51.279068  712609 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:06:51.282183  712609 system_pods.go:86] 8 kube-system pods found
	I1124 09:06:51.282218  712609 system_pods.go:89] "coredns-66bc5c9577-pj9dj" [aeb3ca53-e377-4bb6-ac0b-0d30d279be3f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:51.282227  712609 system_pods.go:89] "etcd-embed-certs-841285" [5f3336ea-e36d-4b8f-a6de-c1e595b2961e] Running
	I1124 09:06:51.282235  712609 system_pods.go:89] "kindnet-vx768" [1815dcaa-34e5-492f-9cc5-89725e8bdd87] Running
	I1124 09:06:51.282241  712609 system_pods.go:89] "kube-apiserver-embed-certs-841285" [b0ac5705-f9a9-4fea-8af8-c5d77c7f74ed] Running
	I1124 09:06:51.282247  712609 system_pods.go:89] "kube-controller-manager-embed-certs-841285" [fc1170ed-2663-4ce9-8828-d57be6b82592] Running
	I1124 09:06:51.282251  712609 system_pods.go:89] "kube-proxy-fnp4m" [27a9ad80-225d-4155-82db-5c9e2b99d56c] Running
	I1124 09:06:51.282257  712609 system_pods.go:89] "kube-scheduler-embed-certs-841285" [92d4a46c-4456-426c-a51f-59702108ba5f] Running
	I1124 09:06:51.282264  712609 system_pods.go:89] "storage-provisioner" [a842c350-8d9a-4e1c-a3d6-286e8dd975f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:51.282297  712609 retry.go:31] will retry after 197.083401ms: missing components: kube-dns
	I1124 09:06:51.482726  712609 system_pods.go:86] 8 kube-system pods found
	I1124 09:06:51.482756  712609 system_pods.go:89] "coredns-66bc5c9577-pj9dj" [aeb3ca53-e377-4bb6-ac0b-0d30d279be3f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:51.482761  712609 system_pods.go:89] "etcd-embed-certs-841285" [5f3336ea-e36d-4b8f-a6de-c1e595b2961e] Running
	I1124 09:06:51.482767  712609 system_pods.go:89] "kindnet-vx768" [1815dcaa-34e5-492f-9cc5-89725e8bdd87] Running
	I1124 09:06:51.482771  712609 system_pods.go:89] "kube-apiserver-embed-certs-841285" [b0ac5705-f9a9-4fea-8af8-c5d77c7f74ed] Running
	I1124 09:06:51.482775  712609 system_pods.go:89] "kube-controller-manager-embed-certs-841285" [fc1170ed-2663-4ce9-8828-d57be6b82592] Running
	I1124 09:06:51.482778  712609 system_pods.go:89] "kube-proxy-fnp4m" [27a9ad80-225d-4155-82db-5c9e2b99d56c] Running
	I1124 09:06:51.482782  712609 system_pods.go:89] "kube-scheduler-embed-certs-841285" [92d4a46c-4456-426c-a51f-59702108ba5f] Running
	I1124 09:06:51.482786  712609 system_pods.go:89] "storage-provisioner" [a842c350-8d9a-4e1c-a3d6-286e8dd975f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:51.482801  712609 retry.go:31] will retry after 362.97691ms: missing components: kube-dns
	I1124 09:06:51.850095  712609 system_pods.go:86] 8 kube-system pods found
	I1124 09:06:51.850126  712609 system_pods.go:89] "coredns-66bc5c9577-pj9dj" [aeb3ca53-e377-4bb6-ac0b-0d30d279be3f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:06:51.850132  712609 system_pods.go:89] "etcd-embed-certs-841285" [5f3336ea-e36d-4b8f-a6de-c1e595b2961e] Running
	I1124 09:06:51.850138  712609 system_pods.go:89] "kindnet-vx768" [1815dcaa-34e5-492f-9cc5-89725e8bdd87] Running
	I1124 09:06:51.850142  712609 system_pods.go:89] "kube-apiserver-embed-certs-841285" [b0ac5705-f9a9-4fea-8af8-c5d77c7f74ed] Running
	I1124 09:06:51.850148  712609 system_pods.go:89] "kube-controller-manager-embed-certs-841285" [fc1170ed-2663-4ce9-8828-d57be6b82592] Running
	I1124 09:06:51.850151  712609 system_pods.go:89] "kube-proxy-fnp4m" [27a9ad80-225d-4155-82db-5c9e2b99d56c] Running
	I1124 09:06:51.850156  712609 system_pods.go:89] "kube-scheduler-embed-certs-841285" [92d4a46c-4456-426c-a51f-59702108ba5f] Running
	I1124 09:06:51.850170  712609 system_pods.go:89] "storage-provisioner" [a842c350-8d9a-4e1c-a3d6-286e8dd975f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:06:51.850192  712609 retry.go:31] will retry after 480.664538ms: missing components: kube-dns
	I1124 09:06:52.335518  712609 system_pods.go:86] 8 kube-system pods found
	I1124 09:06:52.335548  712609 system_pods.go:89] "coredns-66bc5c9577-pj9dj" [aeb3ca53-e377-4bb6-ac0b-0d30d279be3f] Running
	I1124 09:06:52.335557  712609 system_pods.go:89] "etcd-embed-certs-841285" [5f3336ea-e36d-4b8f-a6de-c1e595b2961e] Running
	I1124 09:06:52.335562  712609 system_pods.go:89] "kindnet-vx768" [1815dcaa-34e5-492f-9cc5-89725e8bdd87] Running
	I1124 09:06:52.335567  712609 system_pods.go:89] "kube-apiserver-embed-certs-841285" [b0ac5705-f9a9-4fea-8af8-c5d77c7f74ed] Running
	I1124 09:06:52.335573  712609 system_pods.go:89] "kube-controller-manager-embed-certs-841285" [fc1170ed-2663-4ce9-8828-d57be6b82592] Running
	I1124 09:06:52.335578  712609 system_pods.go:89] "kube-proxy-fnp4m" [27a9ad80-225d-4155-82db-5c9e2b99d56c] Running
	I1124 09:06:52.335584  712609 system_pods.go:89] "kube-scheduler-embed-certs-841285" [92d4a46c-4456-426c-a51f-59702108ba5f] Running
	I1124 09:06:52.335588  712609 system_pods.go:89] "storage-provisioner" [a842c350-8d9a-4e1c-a3d6-286e8dd975f8] Running
	I1124 09:06:52.335599  712609 system_pods.go:126] duration metric: took 1.056524192s to wait for k8s-apps to be running ...
	I1124 09:06:52.335610  712609 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:06:52.335668  712609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:06:52.348782  712609 system_svc.go:56] duration metric: took 13.164048ms WaitForService to wait for kubelet
	I1124 09:06:52.348806  712609 kubeadm.go:587] duration metric: took 12.374379771s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:06:52.348823  712609 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:06:52.351516  712609 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:06:52.351546  712609 node_conditions.go:123] node cpu capacity is 8
	I1124 09:06:52.351563  712609 node_conditions.go:105] duration metric: took 2.735404ms to run NodePressure ...
	I1124 09:06:52.351581  712609 start.go:242] waiting for startup goroutines ...
	I1124 09:06:52.351595  712609 start.go:247] waiting for cluster config update ...
	I1124 09:06:52.351612  712609 start.go:256] writing updated cluster config ...
	I1124 09:06:52.351933  712609 ssh_runner.go:195] Run: rm -f paused
	I1124 09:06:52.355685  712609 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:06:52.359005  712609 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pj9dj" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.362797  712609 pod_ready.go:94] pod "coredns-66bc5c9577-pj9dj" is "Ready"
	I1124 09:06:52.362820  712609 pod_ready.go:86] duration metric: took 3.79319ms for pod "coredns-66bc5c9577-pj9dj" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.364555  712609 pod_ready.go:83] waiting for pod "etcd-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.367975  712609 pod_ready.go:94] pod "etcd-embed-certs-841285" is "Ready"
	I1124 09:06:52.367994  712609 pod_ready.go:86] duration metric: took 3.418324ms for pod "etcd-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.369845  712609 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.373364  712609 pod_ready.go:94] pod "kube-apiserver-embed-certs-841285" is "Ready"
	I1124 09:06:52.373385  712609 pod_ready.go:86] duration metric: took 3.516894ms for pod "kube-apiserver-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.375033  712609 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.134280  709503 pod_ready.go:94] pod "coredns-5dd5756b68-vxxnm" is "Ready"
	I1124 09:06:51.134309  709503 pod_ready.go:86] duration metric: took 37.505689734s for pod "coredns-5dd5756b68-vxxnm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.137872  709503 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.143048  709503 pod_ready.go:94] pod "etcd-old-k8s-version-128377" is "Ready"
	I1124 09:06:51.143074  709503 pod_ready.go:86] duration metric: took 5.175259ms for pod "etcd-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.146283  709503 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.151227  709503 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-128377" is "Ready"
	I1124 09:06:51.151255  709503 pod_ready.go:86] duration metric: took 4.946885ms for pod "kube-apiserver-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.154486  709503 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.333825  709503 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-128377" is "Ready"
	I1124 09:06:51.333851  709503 pod_ready.go:86] duration metric: took 179.341709ms for pod "kube-controller-manager-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.535398  709503 pod_ready.go:83] waiting for pod "kube-proxy-fpbs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:51.933689  709503 pod_ready.go:94] pod "kube-proxy-fpbs2" is "Ready"
	I1124 09:06:51.933722  709503 pod_ready.go:86] duration metric: took 398.293307ms for pod "kube-proxy-fpbs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.133891  709503 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.533145  709503 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-128377" is "Ready"
	I1124 09:06:52.533173  709503 pod_ready.go:86] duration metric: took 399.255408ms for pod "kube-scheduler-old-k8s-version-128377" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.533185  709503 pod_ready.go:40] duration metric: took 38.910563367s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:06:52.577376  709503 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 09:06:52.578870  709503 out.go:203] 
	W1124 09:06:52.579914  709503 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 09:06:52.580924  709503 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 09:06:52.581923  709503 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-128377" cluster and "default" namespace by default
	I1124 09:06:52.759728  712609 pod_ready.go:94] pod "kube-controller-manager-embed-certs-841285" is "Ready"
	I1124 09:06:52.759755  712609 pod_ready.go:86] duration metric: took 384.703934ms for pod "kube-controller-manager-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:52.959669  712609 pod_ready.go:83] waiting for pod "kube-proxy-fnp4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:53.359988  712609 pod_ready.go:94] pod "kube-proxy-fnp4m" is "Ready"
	I1124 09:06:53.360015  712609 pod_ready.go:86] duration metric: took 400.321858ms for pod "kube-proxy-fnp4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:53.560301  712609 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:53.959937  712609 pod_ready.go:94] pod "kube-scheduler-embed-certs-841285" is "Ready"
	I1124 09:06:53.959964  712609 pod_ready.go:86] duration metric: took 399.640947ms for pod "kube-scheduler-embed-certs-841285" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:06:53.959975  712609 pod_ready.go:40] duration metric: took 1.604258428s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:06:54.004555  712609 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:06:54.006291  712609 out.go:179] * Done! kubectl is now configured to use "embed-certs-841285" cluster and "default" namespace by default
	I1124 09:06:51.101685  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:51.102112  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:51.102174  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:51.102227  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:51.135040  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:51.135065  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:51.135071  685562 cri.go:89] found id: ""
	I1124 09:06:51.135081  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:51.135148  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.140404  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.144856  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:51.144940  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:51.180635  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:51.180660  685562 cri.go:89] found id: ""
	I1124 09:06:51.180673  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:51.180732  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.187022  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:51.187093  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:51.215838  685562 cri.go:89] found id: ""
	I1124 09:06:51.215863  685562 logs.go:282] 0 containers: []
	W1124 09:06:51.215871  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:51.215877  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:51.215933  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:51.244066  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:51.244094  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:51.244100  685562 cri.go:89] found id: ""
	I1124 09:06:51.244109  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:51.244178  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.248240  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.252274  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:51.252342  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:51.285805  685562 cri.go:89] found id: ""
	I1124 09:06:51.285828  685562 logs.go:282] 0 containers: []
	W1124 09:06:51.285838  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:51.285847  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:51.285906  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:51.323489  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:51.323527  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:51.323533  685562 cri.go:89] found id: ""
	I1124 09:06:51.323543  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:51.323604  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.328663  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:51.333540  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:51.333610  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:51.362894  685562 cri.go:89] found id: ""
	I1124 09:06:51.362922  685562 logs.go:282] 0 containers: []
	W1124 09:06:51.362932  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:51.362941  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:51.363008  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:51.394531  685562 cri.go:89] found id: ""
	I1124 09:06:51.394556  685562 logs.go:282] 0 containers: []
	W1124 09:06:51.394566  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:51.394580  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:51.394599  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:51.475738  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:51.475775  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:51.491643  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:51.491678  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:51.532760  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:51.532799  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:51.569840  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:51.569885  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:51.614611  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:51.614657  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:51.649935  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:51.649970  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:51.697040  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:51.697082  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:51.758985  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:51.759012  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:51.759029  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:51.791554  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:51.791583  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:51.826807  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:51.826843  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:51.870472  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:51.870507  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:54.404826  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:54.405255  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:54.405323  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:54.405386  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:54.433970  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:54.433998  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:54.434003  685562 cri.go:89] found id: ""
	I1124 09:06:54.434012  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:54.434075  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.438414  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.442166  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:54.442238  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:54.468667  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:54.468694  685562 cri.go:89] found id: ""
	I1124 09:06:54.468706  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:54.468766  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.472777  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:54.472838  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:54.498949  685562 cri.go:89] found id: ""
	I1124 09:06:54.498975  685562 logs.go:282] 0 containers: []
	W1124 09:06:54.498985  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:54.498993  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:54.499054  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:54.529848  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:54.529868  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:54.529871  685562 cri.go:89] found id: ""
	I1124 09:06:54.529879  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:54.529940  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.534397  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.538638  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:54.538709  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:54.567281  685562 cri.go:89] found id: ""
	I1124 09:06:54.567310  685562 logs.go:282] 0 containers: []
	W1124 09:06:54.567322  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:54.567332  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:54.567386  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:54.596806  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:54.596836  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:54.596843  685562 cri.go:89] found id: ""
	I1124 09:06:54.596853  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:54.596914  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.601444  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:54.605871  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:54.605941  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:54.633262  685562 cri.go:89] found id: ""
	I1124 09:06:54.633287  685562 logs.go:282] 0 containers: []
	W1124 09:06:54.633295  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:54.633301  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:54.633350  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:54.660983  685562 cri.go:89] found id: ""
	I1124 09:06:54.661010  685562 logs.go:282] 0 containers: []
	W1124 09:06:54.661020  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:54.661034  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:54.661060  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:54.695211  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:54.695242  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:54.738087  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:54.738118  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:54.768628  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:54.768660  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:54.851230  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:54.851260  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:54.882690  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:54.882718  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:54.915991  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:54.916021  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:54.943256  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:54.943281  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:54.969234  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:54.969270  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:55.001750  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:55.001784  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:55.015657  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:55.015687  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:55.072493  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:55.072512  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:55.072531  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:57.607270  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:06:57.607779  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:06:57.607836  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:06:57.607903  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:06:57.638496  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:57.638521  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:57.638525  685562 cri.go:89] found id: ""
	I1124 09:06:57.638533  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:06:57.638588  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.642977  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.646554  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:06:57.646625  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:06:57.676323  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:57.676353  685562 cri.go:89] found id: ""
	I1124 09:06:57.676364  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:06:57.676426  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.680991  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:06:57.681061  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:06:57.707542  685562 cri.go:89] found id: ""
	I1124 09:06:57.707573  685562 logs.go:282] 0 containers: []
	W1124 09:06:57.707584  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:06:57.707592  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:06:57.707650  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:06:57.737756  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:57.737782  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:06:57.737788  685562 cri.go:89] found id: ""
	I1124 09:06:57.737798  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:06:57.737860  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.742071  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.745921  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:06:57.745994  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:06:57.775084  685562 cri.go:89] found id: ""
	I1124 09:06:57.775108  685562 logs.go:282] 0 containers: []
	W1124 09:06:57.775119  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:06:57.775128  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:06:57.775200  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:06:57.803547  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:57.803575  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:57.803580  685562 cri.go:89] found id: ""
	I1124 09:06:57.803592  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:06:57.803656  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.808035  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:06:57.811815  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:06:57.811877  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:06:57.838909  685562 cri.go:89] found id: ""
	I1124 09:06:57.838941  685562 logs.go:282] 0 containers: []
	W1124 09:06:57.838953  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:06:57.838961  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:06:57.839023  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:06:57.867727  685562 cri.go:89] found id: ""
	I1124 09:06:57.867752  685562 logs.go:282] 0 containers: []
	W1124 09:06:57.867765  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:06:57.867778  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:06:57.867794  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:06:57.902109  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:06:57.902140  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:06:57.954496  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:06:57.954531  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:06:58.040359  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:06:58.040394  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:06:58.103496  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:06:58.103527  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:06:58.103541  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:06:58.135471  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:06:58.135503  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:06:58.165443  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:06:58.165510  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:06:58.196093  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:06:58.196119  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:06:58.227441  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:06:58.227488  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:06:58.241918  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:06:58.241949  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:06:58.275785  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:06:58.275819  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:06:58.308006  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:06:58.308038  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:00.843510  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:07:00.843943  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:07:00.843997  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:07:00.844048  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:07:00.871376  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:07:00.871402  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:00.871409  685562 cri.go:89] found id: ""
	I1124 09:07:00.871418  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:07:00.871495  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:00.875484  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:00.879875  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:07:00.879945  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:07:00.908292  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:00.908314  685562 cri.go:89] found id: ""
	I1124 09:07:00.908322  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:07:00.908370  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:00.912598  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:07:00.912674  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:07:00.940116  685562 cri.go:89] found id: ""
	I1124 09:07:00.940140  685562 logs.go:282] 0 containers: []
	W1124 09:07:00.940150  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:07:00.940159  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:07:00.940221  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:07:00.969359  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:00.969389  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:00.969394  685562 cri.go:89] found id: ""
	I1124 09:07:00.969402  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:07:00.969479  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:00.973784  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:00.977637  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:07:00.977697  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:07:01.004278  685562 cri.go:89] found id: ""
	I1124 09:07:01.004300  685562 logs.go:282] 0 containers: []
	W1124 09:07:01.004307  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:07:01.004314  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:07:01.004370  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:07:01.030843  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:07:01.030868  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:01.030874  685562 cri.go:89] found id: ""
	I1124 09:07:01.030885  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:07:01.030939  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:01.034825  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:01.038531  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:07:01.038597  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:07:01.066620  685562 cri.go:89] found id: ""
	I1124 09:07:01.066645  685562 logs.go:282] 0 containers: []
	W1124 09:07:01.066655  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:07:01.066663  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:07:01.066727  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:07:01.094693  685562 cri.go:89] found id: ""
	I1124 09:07:01.094722  685562 logs.go:282] 0 containers: []
	W1124 09:07:01.094731  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:07:01.094745  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:07:01.094762  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:07:01.108685  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:07:01.108710  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:07:01.140568  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:07:01.140595  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:01.173899  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:07:01.173927  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:01.202584  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:07:01.202609  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:07:01.230307  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:07:01.230341  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:07:01.271269  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:07:01.271296  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:07:01.302414  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:07:01.302440  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:07:01.383595  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:07:01.383629  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:07:01.441216  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:07:01.441235  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:07:01.441258  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:01.475129  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:07:01.475160  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:01.510336  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:07:01.510371  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:04.045562  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:07:04.045982  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:07:04.046039  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:07:04.046094  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:07:04.077500  685562 cri.go:89] found id: "161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:07:04.077518  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:04.077524  685562 cri.go:89] found id: ""
	I1124 09:07:04.077533  685562 logs.go:282] 2 containers: [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:07:04.077588  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:04.082318  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:04.086229  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:07:04.086292  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:07:04.124363  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:04.124386  685562 cri.go:89] found id: ""
	I1124 09:07:04.124397  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:07:04.124560  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:04.129005  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:07:04.129082  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:07:04.160298  685562 cri.go:89] found id: ""
	I1124 09:07:04.160325  685562 logs.go:282] 0 containers: []
	W1124 09:07:04.160337  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:07:04.160345  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:07:04.160404  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:07:04.192903  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:04.192930  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:04.192937  685562 cri.go:89] found id: ""
	I1124 09:07:04.192946  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:07:04.193002  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:04.197513  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:04.202109  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:07:04.202224  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:07:04.234591  685562 cri.go:89] found id: ""
	I1124 09:07:04.234622  685562 logs.go:282] 0 containers: []
	W1124 09:07:04.234635  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:07:04.234644  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:07:04.234704  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:07:04.266614  685562 cri.go:89] found id: "8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:07:04.266636  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:04.266641  685562 cri.go:89] found id: ""
	I1124 09:07:04.266651  685562 logs.go:282] 2 containers: [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:07:04.266700  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:04.271650  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:04.275764  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:07:04.275821  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:07:04.309939  685562 cri.go:89] found id: ""
	I1124 09:07:04.309967  685562 logs.go:282] 0 containers: []
	W1124 09:07:04.309976  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:07:04.309984  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:07:04.310062  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:07:04.345857  685562 cri.go:89] found id: ""
	I1124 09:07:04.345882  685562 logs.go:282] 0 containers: []
	W1124 09:07:04.345890  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:07:04.345901  685562 logs.go:123] Gathering logs for kube-controller-manager [8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e] ...
	I1124 09:07:04.345916  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8da222183fceb0040c993f0d3d9c85678c249dac2b110af18f3fa96f8a22cb0e"
	I1124 09:07:04.383161  685562 logs.go:123] Gathering logs for kube-apiserver [161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9] ...
	I1124 09:07:04.383193  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 161269ab5c64e557b48659380c563a80efa6d5e0dd59e56d75cb836526a396c9"
	I1124 09:07:04.429033  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:07:04.429173  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:04.473683  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:07:04.473715  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:04.506085  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:07:04.506115  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:04.544742  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:07:04.544773  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:07:04.595895  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:07:04.595928  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:07:04.632887  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:07:04.632920  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:07:04.740089  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:07:04.740134  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:07:04.757907  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:07:04.757947  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:07:04.825669  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:07:04.825698  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:07:04.825712  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	2794c60f1b87d       56cc512116c8f       10 seconds ago      Running             busybox                   0                   cd6e9dd958e1b       busybox                                      default
	5791bcd31b139       52546a367cc9e       15 seconds ago      Running             coredns                   0                   cea257d400b5b       coredns-66bc5c9577-pj9dj                     kube-system
	bb014e8f46371       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   b387d8741a385       storage-provisioner                          kube-system
	70e7d5014d73f       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   7fb43b0ba3148       kindnet-vx768                                kube-system
	aceceb2c284ef       8aa150647e88a       26 seconds ago      Running             kube-proxy                0                   6555090d7ce71       kube-proxy-fnp4m                             kube-system
	d97d24cf8d340       88320b5498ff2       37 seconds ago      Running             kube-scheduler            0                   cba101b3a6b17       kube-scheduler-embed-certs-841285            kube-system
	2ce09b161b5c2       01e8bacf0f500       37 seconds ago      Running             kube-controller-manager   0                   7ea2f34b1722b       kube-controller-manager-embed-certs-841285   kube-system
	f898005685984       a5f569d49a979       37 seconds ago      Running             kube-apiserver            0                   66c80159a2c1b       kube-apiserver-embed-certs-841285            kube-system
	6d95f1561bf17       a3e246e9556e9       37 seconds ago      Running             etcd                      0                   c492c3650c4f1       etcd-embed-certs-841285                      kube-system
	
	
	==> containerd <==
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.601195355Z" level=info msg="CreateContainer within sandbox \"b387d8741a385e01b5c7a73e98f42bf5db21a510fac5123e093fe5421dec8fad\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"bb014e8f4637159e636d0a426c87a05841ddeac54ecd7c79319307dddaca5a7e\""
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.601846385Z" level=info msg="StartContainer for \"bb014e8f4637159e636d0a426c87a05841ddeac54ecd7c79319307dddaca5a7e\""
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.602954294Z" level=info msg="connecting to shim bb014e8f4637159e636d0a426c87a05841ddeac54ecd7c79319307dddaca5a7e" address="unix:///run/containerd/s/27bf57dc6ceb1e46fc50df6038dd3da7382d463a39b8580b6eb4b11174d68acb" protocol=ttrpc version=3
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.605099421Z" level=info msg="Container 5791bcd31b139a067d22096d1c802834a688cce871829ade2568ef2c21c27c29: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.610766750Z" level=info msg="CreateContainer within sandbox \"cea257d400b5bb22db6a66b2ebfbc367de9158d5780269a913335780361d1c8c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5791bcd31b139a067d22096d1c802834a688cce871829ade2568ef2c21c27c29\""
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.611340066Z" level=info msg="StartContainer for \"5791bcd31b139a067d22096d1c802834a688cce871829ade2568ef2c21c27c29\""
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.612365214Z" level=info msg="connecting to shim 5791bcd31b139a067d22096d1c802834a688cce871829ade2568ef2c21c27c29" address="unix:///run/containerd/s/a45e379d451fef72676ccc0f1406be396cadcd8bf5f03b5dc3c8b6207502e546" protocol=ttrpc version=3
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.661054178Z" level=info msg="StartContainer for \"5791bcd31b139a067d22096d1c802834a688cce871829ade2568ef2c21c27c29\" returns successfully"
	Nov 24 09:06:51 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:51.661111729Z" level=info msg="StartContainer for \"bb014e8f4637159e636d0a426c87a05841ddeac54ecd7c79319307dddaca5a7e\" returns successfully"
	Nov 24 09:06:54 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:54.497340066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b0e3c418-2bd8-4d22-8f34-07ae172f4007,Namespace:default,Attempt:0,}"
	Nov 24 09:06:54 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:54.525772270Z" level=info msg="connecting to shim cd6e9dd958e1b877fa364c95cd9afc0cd535d0bca4b2783f855f90f353695930" address="unix:///run/containerd/s/2147e4cab68b4dde9e2aa772b84a3fd7aabb7c0044d0ee461b0ddf18a05ff541" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 09:06:54 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:54.598675142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b0e3c418-2bd8-4d22-8f34-07ae172f4007,Namespace:default,Attempt:0,} returns sandbox id \"cd6e9dd958e1b877fa364c95cd9afc0cd535d0bca4b2783f855f90f353695930\""
	Nov 24 09:06:54 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:54.600749884Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.866538093Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.867018469Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396648"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.868020689Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.869926297Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.870347246Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.269554675s"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.870396814Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.874354233Z" level=info msg="CreateContainer within sandbox \"cd6e9dd958e1b877fa364c95cd9afc0cd535d0bca4b2783f855f90f353695930\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.880517848Z" level=info msg="Container 2794c60f1b87dd413e19014dcba2972de5f1a47c7fca91d3886c78dac452b073: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.885938620Z" level=info msg="CreateContainer within sandbox \"cd6e9dd958e1b877fa364c95cd9afc0cd535d0bca4b2783f855f90f353695930\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"2794c60f1b87dd413e19014dcba2972de5f1a47c7fca91d3886c78dac452b073\""
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.886543743Z" level=info msg="StartContainer for \"2794c60f1b87dd413e19014dcba2972de5f1a47c7fca91d3886c78dac452b073\""
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.888055152Z" level=info msg="connecting to shim 2794c60f1b87dd413e19014dcba2972de5f1a47c7fca91d3886c78dac452b073" address="unix:///run/containerd/s/2147e4cab68b4dde9e2aa772b84a3fd7aabb7c0044d0ee461b0ddf18a05ff541" protocol=ttrpc version=3
	Nov 24 09:06:56 embed-certs-841285 containerd[664]: time="2025-11-24T09:06:56.956158525Z" level=info msg="StartContainer for \"2794c60f1b87dd413e19014dcba2972de5f1a47c7fca91d3886c78dac452b073\" returns successfully"
	
	
	==> coredns [5791bcd31b139a067d22096d1c802834a688cce871829ade2568ef2c21c27c29] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42241 - 35548 "HINFO IN 8163729340161881770.3044721224429617214. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033273972s
	
	
	==> describe nodes <==
	Name:               embed-certs-841285
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-841285
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=embed-certs-841285
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_06_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:06:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-841285
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:07:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:07:04 +0000   Mon, 24 Nov 2025 09:06:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:07:04 +0000   Mon, 24 Nov 2025 09:06:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:07:04 +0000   Mon, 24 Nov 2025 09:06:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:07:04 +0000   Mon, 24 Nov 2025 09:06:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-841285
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ebc07106-33bb-498a-bebe-7072c74c7486
	  Boot ID:                    f052cd47-57de-4521-b5fb-139979fdced9
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-pj9dj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-embed-certs-841285                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-vx768                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-841285             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-embed-certs-841285    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-fnp4m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-841285             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node embed-certs-841285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node embed-certs-841285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x7 over 38s)  kubelet          Node embed-certs-841285 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node embed-certs-841285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node embed-certs-841285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet          Node embed-certs-841285 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node embed-certs-841285 event: Registered Node embed-certs-841285 in Controller
	  Normal  NodeReady                16s                kubelet          Node embed-certs-841285 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [6d95f1561bf17dce61ba80d159dea00411b59b2a76b869e85c4db0b747e6e052] <==
	{"level":"warn","ts":"2025-11-24T09:06:31.039076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.046575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.058493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.063120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.071121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.079615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.086948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.093637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.099924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.106225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.119544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.132929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.146561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.153181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.159133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.168097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.176030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.182287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.188508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.194669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.200971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.208713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.214933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.231700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:06:31.237642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56952","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:07:07 up  3:49,  0 user,  load average: 3.73, 3.61, 10.25
	Linux embed-certs-841285 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [70e7d5014d73fb61f0d19dd479c539b45ebfacffc4d3a9a9e0dbc8e25a4ff258] <==
	I1124 09:06:40.842614       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:06:40.842888       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 09:06:40.843044       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:06:40.843068       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:06:40.843102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:06:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:06:41.044928       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:06:41.044994       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:06:41.045304       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:06:41.045371       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:06:41.442524       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:06:41.442574       1 metrics.go:72] Registering metrics
	I1124 09:06:41.442686       1 controller.go:711] "Syncing nftables rules"
	I1124 09:06:51.047556       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:06:51.047636       1 main.go:301] handling current node
	I1124 09:07:01.046751       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 09:07:01.046784       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f898005685984dc4556869a93c75316cdf14d3c6467c0e990707fdb33212bf16] <==
	I1124 09:06:31.741773       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:06:31.744875       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 09:06:31.746219       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 09:06:31.746263       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:06:31.754918       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:06:31.755769       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:06:31.932098       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:06:32.644786       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:06:32.648495       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:06:32.648514       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:06:33.062909       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:06:33.096718       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:06:33.147952       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:06:33.153388       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 09:06:33.154337       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:06:33.158558       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:06:33.669791       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:06:34.006361       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:06:34.016072       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:06:34.023841       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:06:38.672578       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 09:06:39.621392       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 09:06:39.722996       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:06:39.726414       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 09:07:03.288166       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:58428: use of closed network connection
	
	
	==> kube-controller-manager [2ce09b161b5c24b322e72a291e6d0c4e6fff790b91ca66e60518ed811ec018de] <==
	I1124 09:06:38.649143       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-841285" podCIDRs=["10.244.0.0/24"]
	I1124 09:06:38.669149       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 09:06:38.669170       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 09:06:38.669190       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 09:06:38.669257       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 09:06:38.669276       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 09:06:38.669294       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:06:38.669314       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 09:06:38.669335       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 09:06:38.669261       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 09:06:38.669369       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 09:06:38.669368       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 09:06:38.669441       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-841285"
	I1124 09:06:38.669532       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 09:06:38.669591       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 09:06:38.669672       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 09:06:38.669955       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 09:06:38.670056       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 09:06:38.670092       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 09:06:38.670155       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 09:06:38.670372       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 09:06:38.672210       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 09:06:38.676913       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:06:38.698131       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:06:53.689391       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [aceceb2c284ef07de874eba9caa9408bb0f88b56e8227e343a08ec26fb375bf7] <==
	I1124 09:06:40.335788       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:06:40.406183       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:06:40.507249       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:06:40.507284       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 09:06:40.507401       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:06:40.532334       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:06:40.532404       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:06:40.538247       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:06:40.538649       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:06:40.538677       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:06:40.540090       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:06:40.540110       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:06:40.540226       1 config.go:200] "Starting service config controller"
	I1124 09:06:40.540298       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:06:40.540391       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:06:40.540271       1 config.go:309] "Starting node config controller"
	I1124 09:06:40.540996       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:06:40.541006       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:06:40.540317       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:06:40.640235       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 09:06:40.641447       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:06:40.641453       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d97d24cf8d340628a9581ff5edc0ea87945c6edffba8606d442b1e4884d4e7f2] <==
	E1124 09:06:31.695663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 09:06:31.695702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:06:31.695841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:06:31.695867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:06:31.695924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:06:31.695948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:06:31.696010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:06:31.696091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 09:06:31.696098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:06:31.695993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:06:31.696215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 09:06:31.696495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:06:31.696524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:06:31.696743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:06:32.543383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:06:32.564651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:06:32.631555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:06:32.669643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:06:32.741498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:06:32.780408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:06:32.798896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:06:32.878930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:06:32.915329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:06:33.029945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 09:06:35.492553       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:06:34 embed-certs-841285 kubelet[1454]: E1124 09:06:34.867929    1454 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-841285\" already exists" pod="kube-system/kube-scheduler-embed-certs-841285"
	Nov 24 09:06:34 embed-certs-841285 kubelet[1454]: I1124 09:06:34.881371    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-841285" podStartSLOduration=1.8813263839999999 podStartE2EDuration="1.881326384s" podCreationTimestamp="2025-11-24 09:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:34.881122775 +0000 UTC m=+1.116853867" watchObservedRunningTime="2025-11-24 09:06:34.881326384 +0000 UTC m=+1.117057470"
	Nov 24 09:06:34 embed-certs-841285 kubelet[1454]: I1124 09:06:34.899370    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-841285" podStartSLOduration=1.899347068 podStartE2EDuration="1.899347068s" podCreationTimestamp="2025-11-24 09:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:34.889972597 +0000 UTC m=+1.125703687" watchObservedRunningTime="2025-11-24 09:06:34.899347068 +0000 UTC m=+1.135078156"
	Nov 24 09:06:34 embed-certs-841285 kubelet[1454]: I1124 09:06:34.906717    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-841285" podStartSLOduration=1.906697591 podStartE2EDuration="1.906697591s" podCreationTimestamp="2025-11-24 09:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:34.899526679 +0000 UTC m=+1.135257767" watchObservedRunningTime="2025-11-24 09:06:34.906697591 +0000 UTC m=+1.142428662"
	Nov 24 09:06:34 embed-certs-841285 kubelet[1454]: I1124 09:06:34.906882    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-841285" podStartSLOduration=1.906872854 podStartE2EDuration="1.906872854s" podCreationTimestamp="2025-11-24 09:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:34.906868153 +0000 UTC m=+1.142599233" watchObservedRunningTime="2025-11-24 09:06:34.906872854 +0000 UTC m=+1.142603943"
	Nov 24 09:06:38 embed-certs-841285 kubelet[1454]: I1124 09:06:38.689859    1454 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 09:06:38 embed-certs-841285 kubelet[1454]: I1124 09:06:38.690619    1454 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670542    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27a9ad80-225d-4155-82db-5c9e2b99d56c-xtables-lock\") pod \"kube-proxy-fnp4m\" (UID: \"27a9ad80-225d-4155-82db-5c9e2b99d56c\") " pod="kube-system/kube-proxy-fnp4m"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670590    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27a9ad80-225d-4155-82db-5c9e2b99d56c-lib-modules\") pod \"kube-proxy-fnp4m\" (UID: \"27a9ad80-225d-4155-82db-5c9e2b99d56c\") " pod="kube-system/kube-proxy-fnp4m"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670617    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v489d\" (UniqueName: \"kubernetes.io/projected/27a9ad80-225d-4155-82db-5c9e2b99d56c-kube-api-access-v489d\") pod \"kube-proxy-fnp4m\" (UID: \"27a9ad80-225d-4155-82db-5c9e2b99d56c\") " pod="kube-system/kube-proxy-fnp4m"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670658    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1815dcaa-34e5-492f-9cc5-89725e8bdd87-cni-cfg\") pod \"kindnet-vx768\" (UID: \"1815dcaa-34e5-492f-9cc5-89725e8bdd87\") " pod="kube-system/kindnet-vx768"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670690    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1815dcaa-34e5-492f-9cc5-89725e8bdd87-xtables-lock\") pod \"kindnet-vx768\" (UID: \"1815dcaa-34e5-492f-9cc5-89725e8bdd87\") " pod="kube-system/kindnet-vx768"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670713    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1815dcaa-34e5-492f-9cc5-89725e8bdd87-lib-modules\") pod \"kindnet-vx768\" (UID: \"1815dcaa-34e5-492f-9cc5-89725e8bdd87\") " pod="kube-system/kindnet-vx768"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670736    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ht4h\" (UniqueName: \"kubernetes.io/projected/1815dcaa-34e5-492f-9cc5-89725e8bdd87-kube-api-access-2ht4h\") pod \"kindnet-vx768\" (UID: \"1815dcaa-34e5-492f-9cc5-89725e8bdd87\") " pod="kube-system/kindnet-vx768"
	Nov 24 09:06:39 embed-certs-841285 kubelet[1454]: I1124 09:06:39.670792    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27a9ad80-225d-4155-82db-5c9e2b99d56c-kube-proxy\") pod \"kube-proxy-fnp4m\" (UID: \"27a9ad80-225d-4155-82db-5c9e2b99d56c\") " pod="kube-system/kube-proxy-fnp4m"
	Nov 24 09:06:40 embed-certs-841285 kubelet[1454]: I1124 09:06:40.895017    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vx768" podStartSLOduration=1.894996549 podStartE2EDuration="1.894996549s" podCreationTimestamp="2025-11-24 09:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:40.885621976 +0000 UTC m=+7.121353064" watchObservedRunningTime="2025-11-24 09:06:40.894996549 +0000 UTC m=+7.130727638"
	Nov 24 09:06:40 embed-certs-841285 kubelet[1454]: I1124 09:06:40.903392    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fnp4m" podStartSLOduration=1.9033737990000001 podStartE2EDuration="1.903373799s" podCreationTimestamp="2025-11-24 09:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:40.894962969 +0000 UTC m=+7.130694058" watchObservedRunningTime="2025-11-24 09:06:40.903373799 +0000 UTC m=+7.139104893"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.149563    1454 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.258701    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jqds\" (UniqueName: \"kubernetes.io/projected/a842c350-8d9a-4e1c-a3d6-286e8dd975f8-kube-api-access-2jqds\") pod \"storage-provisioner\" (UID: \"a842c350-8d9a-4e1c-a3d6-286e8dd975f8\") " pod="kube-system/storage-provisioner"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.258767    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a842c350-8d9a-4e1c-a3d6-286e8dd975f8-tmp\") pod \"storage-provisioner\" (UID: \"a842c350-8d9a-4e1c-a3d6-286e8dd975f8\") " pod="kube-system/storage-provisioner"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.258797    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aeb3ca53-e377-4bb6-ac0b-0d30d279be3f-config-volume\") pod \"coredns-66bc5c9577-pj9dj\" (UID: \"aeb3ca53-e377-4bb6-ac0b-0d30d279be3f\") " pod="kube-system/coredns-66bc5c9577-pj9dj"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.258819    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bthj\" (UniqueName: \"kubernetes.io/projected/aeb3ca53-e377-4bb6-ac0b-0d30d279be3f-kube-api-access-8bthj\") pod \"coredns-66bc5c9577-pj9dj\" (UID: \"aeb3ca53-e377-4bb6-ac0b-0d30d279be3f\") " pod="kube-system/coredns-66bc5c9577-pj9dj"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.912441    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pj9dj" podStartSLOduration=12.912418824 podStartE2EDuration="12.912418824s" podCreationTimestamp="2025-11-24 09:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:51.912282127 +0000 UTC m=+18.148013218" watchObservedRunningTime="2025-11-24 09:06:51.912418824 +0000 UTC m=+18.148149913"
	Nov 24 09:06:51 embed-certs-841285 kubelet[1454]: I1124 09:06:51.921130    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.921107228 podStartE2EDuration="11.921107228s" podCreationTimestamp="2025-11-24 09:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:51.921073137 +0000 UTC m=+18.156804227" watchObservedRunningTime="2025-11-24 09:06:51.921107228 +0000 UTC m=+18.156838320"
	Nov 24 09:06:54 embed-certs-841285 kubelet[1454]: I1124 09:06:54.276244    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgjpn\" (UniqueName: \"kubernetes.io/projected/b0e3c418-2bd8-4d22-8f34-07ae172f4007-kube-api-access-jgjpn\") pod \"busybox\" (UID: \"b0e3c418-2bd8-4d22-8f34-07ae172f4007\") " pod="default/busybox"
	
	
	==> storage-provisioner [bb014e8f4637159e636d0a426c87a05841ddeac54ecd7c79319307dddaca5a7e] <==
	I1124 09:06:51.672130       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:06:51.683385       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:06:51.683455       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:06:51.686218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:51.692635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:06:51.692810       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:06:51.693030       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b385703d-3f7e-47f3-bebb-4b78081f4b4c", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-841285_94699be9-2ddd-4f62-90d1-da0627f35948 became leader
	I1124 09:06:51.693655       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-841285_94699be9-2ddd-4f62-90d1-da0627f35948!
	W1124 09:06:51.695986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:51.701008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:06:51.794574       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-841285_94699be9-2ddd-4f62-90d1-da0627f35948!
	W1124 09:06:53.704271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:53.708195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:55.711233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:55.714861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:57.718183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:57.722689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:59.726145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:06:59.730018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:01.733153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:01.736844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:03.741288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:03.746015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:05.749662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:05.755375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
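The storage-provisioner log above shows the provisioner still acquiring its leader-election lock through a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which the API server repeatedly flags as deprecated in favour of discovery.k8s.io/v1 EndpointSlice. A minimal sketch for inspecting that lock object, assuming the embed-certs-841285 context is still reachable (illustrative only, not part of the harness):

	# dump the Endpoints object the provisioner uses as its leader-election lock
	kubectl --context embed-certs-841285 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml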
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-841285 -n embed-certs-841285
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-841285 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (14.38s)
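Note that this DeployApp run fails on its final assertion rather than on pod startup: the post-mortem above shows a healthy control plane and the busybox pod's volumes being mounted, and the failing check is the same "ulimit -n" comparison shown for default-k8s-diff-port below. A minimal diagnostic sketch against the still-running profile container (container name taken from the report; which component clamps the pod's open-file limit, containerd or the kubelet, is an assumption to verify, not something these logs establish):

	# open-file limit inside the minikube node container itself
	docker exec embed-certs-841285 sh -c 'ulimit -n'
	# limit configured for the containerd unit that runs the pods
	docker exec embed-certs-841285 systemctl show containerd --property=LimitNOFILE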

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-603918 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4581197a-228b-4f7d-a2bc-a5ef7b7eb2a7] Pending
helpers_test.go:352: "busybox" [4581197a-228b-4f7d-a2bc-a5ef7b7eb2a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4581197a-228b-4f7d-a2bc-a5ef7b7eb2a7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003627548s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-603918 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
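The failing check can be replayed by hand with roughly the commands the harness runs; a sketch, assuming the default-k8s-diff-port-603918 context is still present (the harness polls the pod list itself rather than using kubectl wait, and the contents of testdata/busybox.yaml are not reproduced in this report):

	kubectl --context default-k8s-diff-port-603918 create -f testdata/busybox.yaml
	kubectl --context default-k8s-diff-port-603918 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
	kubectl --context default-k8s-diff-port-603918 exec busybox -- /bin/sh -c "ulimit -n"   # report: 1024, test expects 1048576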
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-603918
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-603918:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d",
	        "Created": "2025-11-24T09:07:14.844491638Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 730402,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:07:14.880864922Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d/hosts",
	        "LogPath": "/var/lib/docker/containers/1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d/1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d-json.log",
	        "Name": "/default-k8s-diff-port-603918",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-603918:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-603918",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d",
	                "LowerDir": "/var/lib/docker/overlay2/4eae6b16079bac56ef36203e4e58682749e5349afe43e78bc4493341a6fdab7b-init/diff:/var/lib/docker/overlay2/a062700147ad5d1f8f2a68f70ba6ad34ea6495dd365bcb265ab17ea27961837b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4eae6b16079bac56ef36203e4e58682749e5349afe43e78bc4493341a6fdab7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4eae6b16079bac56ef36203e4e58682749e5349afe43e78bc4493341a6fdab7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4eae6b16079bac56ef36203e4e58682749e5349afe43e78bc4493341a6fdab7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-603918",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-603918/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-603918",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-603918",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-603918",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "95e5aae672169464be761abd76dd20b5159df26b725f234432872b2158f40b29",
	            "SandboxKey": "/var/run/docker/netns/95e5aae67216",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-603918": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6224c7b85f0c971619d603b2dfeda75632e2c76d5aeff59e17162534427abf2e",
	                    "EndpointID": "86e0de6394802760d5a713ac2fc670727f1106744eaf7883f423e4f351f91ca9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "9a:2b:82:f9:c8:7c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-603918",
	                        "1f53ec028167"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-603918 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-603918 logs -n 25: (1.154378745s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ ssh     │ -p kubenet-203355 sudo cat /etc/hosts                                                                                                                                                                                                                      │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo cat /etc/resolv.conf                                                                                                                                                                                                                │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo crictl pods                                                                                                                                                                                                                         │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo crictl ps --all                                                                                                                                                                                                                     │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                              │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo ip a s                                                                                                                                                                                                                              │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo ip r s                                                                                                                                                                                                                              │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo iptables-save                                                                                                                                                                                                                       │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo iptables -t nat -L -n -v                                                                                                                                                                                                            │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo systemctl status kubelet --all --full --no-pager                                                                                                                                                                                    │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo systemctl cat kubelet --no-pager                                                                                                                                                                                                    │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                                                                     │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                                    │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                                    │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ start   │ -p default-k8s-diff-port-603918 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-603918 │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ start   │ -p newest-cni-654569 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-841285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-841285           │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ start   │ -p embed-certs-841285 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-841285           │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-654569 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ stop    │ -p newest-cni-654569 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-654569 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ start   │ -p newest-cni-654569 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ image   │ newest-cni-654569 image list --format=json                                                                                                                                                                                                                 │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:08 UTC │
	│ pause   │ -p newest-cni-654569 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:08 UTC │ 24 Nov 25 09:08 UTC │
	│ unpause │ -p newest-cni-654569 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:08 UTC │ 24 Nov 25 09:08 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:07:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:07:48.085172  740119 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:07:48.085422  740119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:07:48.085431  740119 out.go:374] Setting ErrFile to fd 2...
	I1124 09:07:48.085435  740119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:07:48.085654  740119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 09:07:48.086098  740119 out.go:368] Setting JSON to false
	I1124 09:07:48.087476  740119 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13804,"bootTime":1763961464,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:07:48.087538  740119 start.go:143] virtualization: kvm guest
	I1124 09:07:48.089341  740119 out.go:179] * [newest-cni-654569] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:07:48.090342  740119 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:07:48.090357  740119 notify.go:221] Checking for updates...
	I1124 09:07:48.092506  740119 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:07:48.093570  740119 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:07:48.094577  740119 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 09:07:48.095525  740119 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:07:48.096560  740119 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:07:48.098935  740119 config.go:182] Loaded profile config "newest-cni-654569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:07:48.099441  740119 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:07:48.123883  740119 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:07:48.123985  740119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:07:48.180131  740119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:07:48.170292777 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:07:48.180255  740119 docker.go:319] overlay module found
	I1124 09:07:48.181756  740119 out.go:179] * Using the docker driver based on existing profile
	I1124 09:07:48.182725  740119 start.go:309] selected driver: docker
	I1124 09:07:48.182739  740119 start.go:927] validating driver "docker" against &{Name:newest-cni-654569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-654569 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mul
tiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:07:48.182835  740119 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:07:48.183414  740119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:07:48.245922  740119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:07:48.236204951 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:07:48.246244  740119 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 09:07:48.246289  740119 cni.go:84] Creating CNI manager for ""
	I1124 09:07:48.246353  740119 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:07:48.246408  740119 start.go:353] cluster config:
	{Name:newest-cni-654569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-654569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:07:48.248109  740119 out.go:179] * Starting "newest-cni-654569" primary control-plane node in "newest-cni-654569" cluster
	I1124 09:07:48.249170  740119 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 09:07:48.250253  740119 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:07:48.251335  740119 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:07:48.251399  740119 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:07:48.272221  740119 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:07:48.272245  740119 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:07:48.359770  740119 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	W1124 09:07:48.393606  740119 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	I1124 09:07:48.393800  740119 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/config.json ...
	I1124 09:07:48.393944  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:48.394108  740119 cache.go:243] Successfully downloaded all kic artifacts
	I1124 09:07:48.394156  740119 start.go:360] acquireMachinesLock for newest-cni-654569: {Name:mk77a4f7dd1c44df67b8fabeed9184a8f376f91c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:48.394277  740119 start.go:364] duration metric: took 68.815µs to acquireMachinesLock for "newest-cni-654569"
	I1124 09:07:48.394301  740119 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:07:48.394308  740119 fix.go:54] fixHost starting: 
	I1124 09:07:48.394636  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:48.414582  740119 fix.go:112] recreateIfNeeded on newest-cni-654569: state=Stopped err=<nil>
	W1124 09:07:48.414626  740119 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 09:07:46.193942  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:07:46.194408  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
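The api_server.go pair above is one iteration of the apiserver health probe: hit https://<node-ip>:8443/healthz, get "connection refused" while the control plane is still coming up, gather diagnostics, and try again later. A self-contained sketch of such a probe loop (plain net/http with TLS verification skipped instead of loading the cluster CA; the timeout values are assumptions, and this is not the minikube implementation):

// healthz.go - hypothetical sketch of the probe the api_server.go lines above
// perform: poll the apiserver /healthz endpoint until it answers 200 or a
// deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The sketch skips TLS verification rather than loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		}
		// connection refused (as in the log above) or non-200: wait and retry
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %v", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute))
}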
	I1124 09:07:46.194525  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:07:46.194585  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:07:46.240281  685562 cri.go:89] found id: "cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:46.240304  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:46.240310  685562 cri.go:89] found id: ""
	I1124 09:07:46.240319  685562 logs.go:282] 2 containers: [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
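Each "listing CRI containers" block reduces to running sudo crictl ps -a --quiet --name=<component> on the node and splitting the output into container IDs, which become the "found id:" lines. A rough local equivalent, assuming crictl and sudo are available on the machine where it runs (illustrative only, not the cri.go implementation, which executes the command over SSH inside the node):

// listcri.go - hypothetical sketch: list all containers whose name matches a
// component by shelling out to crictl, the same command shown in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// crictl prints one container ID per line; Fields drops blank lines.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}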
	I1124 09:07:46.240383  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.245436  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.250118  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:07:46.250185  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:07:46.280616  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:46.280638  685562 cri.go:89] found id: ""
	I1124 09:07:46.280650  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:07:46.280714  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.285684  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:07:46.285748  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:07:46.318854  685562 cri.go:89] found id: ""
	I1124 09:07:46.318885  685562 logs.go:282] 0 containers: []
	W1124 09:07:46.318898  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:07:46.319198  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:07:46.319291  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:07:46.365180  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:46.365209  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:46.365215  685562 cri.go:89] found id: ""
	I1124 09:07:46.365227  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:07:46.365285  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.370948  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.376120  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:07:46.376278  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:07:46.408937  685562 cri.go:89] found id: ""
	I1124 09:07:46.408967  685562 logs.go:282] 0 containers: []
	W1124 09:07:46.408978  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:07:46.408987  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:07:46.409050  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:07:46.439842  685562 cri.go:89] found id: "d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:46.439865  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:46.439871  685562 cri.go:89] found id: ""
	I1124 09:07:46.439880  685562 logs.go:282] 2 containers: [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:07:46.439941  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.444872  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.449213  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:07:46.449282  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:07:46.483632  685562 cri.go:89] found id: ""
	I1124 09:07:46.483668  685562 logs.go:282] 0 containers: []
	W1124 09:07:46.483681  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:07:46.483690  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:07:46.483751  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:07:46.515552  685562 cri.go:89] found id: ""
	I1124 09:07:46.515583  685562 logs.go:282] 0 containers: []
	W1124 09:07:46.515595  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:07:46.515661  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:07:46.515691  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:07:46.530847  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:07:46.530884  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:07:46.594391  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:07:46.594420  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:07:46.594439  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:46.631540  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:07:46.631571  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:46.670437  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:07:46.670479  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:46.709947  685562 logs.go:123] Gathering logs for kube-controller-manager [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2] ...
	I1124 09:07:46.709980  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:46.741928  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:07:46.741957  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:46.785347  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:07:46.785378  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:07:46.819216  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:07:46.819246  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:07:46.913672  685562 logs.go:123] Gathering logs for kube-apiserver [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6] ...
	I1124 09:07:46.913715  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:46.948732  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:07:46.948764  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:46.978072  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:07:46.978099  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:07:49.524951  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:07:49.525424  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:07:49.525516  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:07:49.525571  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:07:49.553150  685562 cri.go:89] found id: "cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:49.553170  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:49.553173  685562 cri.go:89] found id: ""
	I1124 09:07:49.553181  685562 logs.go:282] 2 containers: [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:07:49.553234  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.557530  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.561623  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:07:49.561685  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:07:49.588210  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:49.588230  685562 cri.go:89] found id: ""
	I1124 09:07:49.588248  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:07:49.588308  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.592320  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:07:49.592401  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:07:49.619927  685562 cri.go:89] found id: ""
	I1124 09:07:49.619953  685562 logs.go:282] 0 containers: []
	W1124 09:07:49.619961  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:07:49.619968  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:07:49.620024  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:07:49.646508  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:49.646528  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:49.646532  685562 cri.go:89] found id: ""
	I1124 09:07:49.646539  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:07:49.646588  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.650850  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.654694  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:07:49.654752  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:07:49.682858  685562 cri.go:89] found id: ""
	I1124 09:07:49.682891  685562 logs.go:282] 0 containers: []
	W1124 09:07:49.682903  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:07:49.682911  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:07:49.682982  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:07:49.710107  685562 cri.go:89] found id: "d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:49.710134  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:49.710140  685562 cri.go:89] found id: ""
	I1124 09:07:49.710150  685562 logs.go:282] 2 containers: [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:07:49.710225  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.714861  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.718812  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:07:49.718872  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:07:49.746561  685562 cri.go:89] found id: ""
	I1124 09:07:49.746593  685562 logs.go:282] 0 containers: []
	W1124 09:07:49.746606  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:07:49.746615  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:07:49.746669  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:07:49.774674  685562 cri.go:89] found id: ""
	I1124 09:07:49.774699  685562 logs.go:282] 0 containers: []
	W1124 09:07:49.774707  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:07:49.774717  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:07:49.774731  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1124 09:07:47.211797  728321 node_ready.go:57] node "default-k8s-diff-port-603918" has "Ready":"False" status (will retry)
	W1124 09:07:49.710953  728321 node_ready.go:57] node "default-k8s-diff-port-603918" has "Ready":"False" status (will retry)
	I1124 09:07:50.211800  728321 node_ready.go:49] node "default-k8s-diff-port-603918" is "Ready"
	I1124 09:07:50.211830  728321 node_ready.go:38] duration metric: took 11.503977315s for node "default-k8s-diff-port-603918" to be "Ready" ...
	I1124 09:07:50.211847  728321 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:07:50.211891  728321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:07:50.225299  728321 api_server.go:72] duration metric: took 11.802560258s to wait for apiserver process to appear ...
	I1124 09:07:50.225333  728321 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:07:50.225370  728321 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 09:07:50.230797  728321 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 09:07:50.231792  728321 api_server.go:141] control plane version: v1.34.2
	I1124 09:07:50.231821  728321 api_server.go:131] duration metric: took 6.479948ms to wait for apiserver health ...
	I1124 09:07:50.231834  728321 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:07:50.234788  728321 system_pods.go:59] 8 kube-system pods found
	I1124 09:07:50.234838  728321 system_pods.go:61] "coredns-66bc5c9577-xrvmp" [33252e00-03f6-4116-98b4-ffd795b3bce8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:07:50.234851  728321 system_pods.go:61] "etcd-default-k8s-diff-port-603918" [48914200-8900-4bb2-abe0-83dda320f67c] Running
	I1124 09:07:50.234864  728321 system_pods.go:61] "kindnet-b9gr6" [53f892c9-f95c-488d-886b-87b4d981b058] Running
	I1124 09:07:50.234870  728321 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-603918" [fd7c4392-7b1f-49b7-ae71-c3d85585a4bb] Running
	I1124 09:07:50.234876  728321 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-603918" [7ae71128-323b-4d75-9716-2911dfc3eff1] Running
	I1124 09:07:50.234882  728321 system_pods.go:61] "kube-proxy-5hvkq" [66cc3c18-98b4-47fa-a69c-90041bacd287] Running
	I1124 09:07:50.234888  728321 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-603918" [33d67c96-b92a-4ebb-a850-62f5984bf88b] Running
	I1124 09:07:50.234897  728321 system_pods.go:61] "storage-provisioner" [1081180d-32ee-417f-aea3-ba27c3ee7c30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:07:50.234909  728321 system_pods.go:74] duration metric: took 3.067184ms to wait for pod list to return data ...
	I1124 09:07:50.234922  728321 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:07:50.237471  728321 default_sa.go:45] found service account: "default"
	I1124 09:07:50.237497  728321 default_sa.go:55] duration metric: took 2.56863ms for default service account to be created ...
	I1124 09:07:50.237507  728321 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:07:50.240092  728321 system_pods.go:86] 8 kube-system pods found
	I1124 09:07:50.240131  728321 system_pods.go:89] "coredns-66bc5c9577-xrvmp" [33252e00-03f6-4116-98b4-ffd795b3bce8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:07:50.240141  728321 system_pods.go:89] "etcd-default-k8s-diff-port-603918" [48914200-8900-4bb2-abe0-83dda320f67c] Running
	I1124 09:07:50.240158  728321 system_pods.go:89] "kindnet-b9gr6" [53f892c9-f95c-488d-886b-87b4d981b058] Running
	I1124 09:07:50.240164  728321 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-603918" [fd7c4392-7b1f-49b7-ae71-c3d85585a4bb] Running
	I1124 09:07:50.240170  728321 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-603918" [7ae71128-323b-4d75-9716-2911dfc3eff1] Running
	I1124 09:07:50.240182  728321 system_pods.go:89] "kube-proxy-5hvkq" [66cc3c18-98b4-47fa-a69c-90041bacd287] Running
	I1124 09:07:50.240186  728321 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-603918" [33d67c96-b92a-4ebb-a850-62f5984bf88b] Running
	I1124 09:07:50.240196  728321 system_pods.go:89] "storage-provisioner" [1081180d-32ee-417f-aea3-ba27c3ee7c30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:07:50.240226  728321 retry.go:31] will retry after 206.653018ms: missing components: kube-dns
	I1124 09:07:50.452255  728321 system_pods.go:86] 8 kube-system pods found
	I1124 09:07:50.452299  728321 system_pods.go:89] "coredns-66bc5c9577-xrvmp" [33252e00-03f6-4116-98b4-ffd795b3bce8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:07:50.452305  728321 system_pods.go:89] "etcd-default-k8s-diff-port-603918" [48914200-8900-4bb2-abe0-83dda320f67c] Running
	I1124 09:07:50.452311  728321 system_pods.go:89] "kindnet-b9gr6" [53f892c9-f95c-488d-886b-87b4d981b058] Running
	I1124 09:07:50.452315  728321 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-603918" [fd7c4392-7b1f-49b7-ae71-c3d85585a4bb] Running
	I1124 09:07:50.452318  728321 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-603918" [7ae71128-323b-4d75-9716-2911dfc3eff1] Running
	I1124 09:07:50.452321  728321 system_pods.go:89] "kube-proxy-5hvkq" [66cc3c18-98b4-47fa-a69c-90041bacd287] Running
	I1124 09:07:50.452325  728321 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-603918" [33d67c96-b92a-4ebb-a850-62f5984bf88b] Running
	I1124 09:07:50.452329  728321 system_pods.go:89] "storage-provisioner" [1081180d-32ee-417f-aea3-ba27c3ee7c30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:07:50.452355  728321 retry.go:31] will retry after 367.625451ms: missing components: kube-dns
	I1124 09:07:50.824329  728321 system_pods.go:86] 8 kube-system pods found
	I1124 09:07:50.824357  728321 system_pods.go:89] "coredns-66bc5c9577-xrvmp" [33252e00-03f6-4116-98b4-ffd795b3bce8] Running
	I1124 09:07:50.824363  728321 system_pods.go:89] "etcd-default-k8s-diff-port-603918" [48914200-8900-4bb2-abe0-83dda320f67c] Running
	I1124 09:07:50.824367  728321 system_pods.go:89] "kindnet-b9gr6" [53f892c9-f95c-488d-886b-87b4d981b058] Running
	I1124 09:07:50.824371  728321 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-603918" [fd7c4392-7b1f-49b7-ae71-c3d85585a4bb] Running
	I1124 09:07:50.824374  728321 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-603918" [7ae71128-323b-4d75-9716-2911dfc3eff1] Running
	I1124 09:07:50.824384  728321 system_pods.go:89] "kube-proxy-5hvkq" [66cc3c18-98b4-47fa-a69c-90041bacd287] Running
	I1124 09:07:50.824388  728321 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-603918" [33d67c96-b92a-4ebb-a850-62f5984bf88b] Running
	I1124 09:07:50.824392  728321 system_pods.go:89] "storage-provisioner" [1081180d-32ee-417f-aea3-ba27c3ee7c30] Running
	I1124 09:07:50.824400  728321 system_pods.go:126] duration metric: took 586.886497ms to wait for k8s-apps to be running ...
	I1124 09:07:50.824412  728321 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:07:50.824490  728321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:07:50.837644  728321 system_svc.go:56] duration metric: took 13.224987ms WaitForService to wait for kubelet
	I1124 09:07:50.837669  728321 kubeadm.go:587] duration metric: took 12.414938686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:07:50.837685  728321 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:07:50.840072  728321 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:07:50.840098  728321 node_conditions.go:123] node cpu capacity is 8
	I1124 09:07:50.840117  728321 node_conditions.go:105] duration metric: took 2.426436ms to run NodePressure ...
	I1124 09:07:50.840133  728321 start.go:242] waiting for startup goroutines ...
	I1124 09:07:50.840147  728321 start.go:247] waiting for cluster config update ...
	I1124 09:07:50.840161  728321 start.go:256] writing updated cluster config ...
	I1124 09:07:50.840487  728321 ssh_runner.go:195] Run: rm -f paused
	I1124 09:07:50.844243  728321 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:07:50.847626  728321 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xrvmp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:50.851574  728321 pod_ready.go:94] pod "coredns-66bc5c9577-xrvmp" is "Ready"
	I1124 09:07:50.851600  728321 pod_ready.go:86] duration metric: took 3.950663ms for pod "coredns-66bc5c9577-xrvmp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:50.853329  728321 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:50.856853  728321 pod_ready.go:94] pod "etcd-default-k8s-diff-port-603918" is "Ready"
	I1124 09:07:50.856873  728321 pod_ready.go:86] duration metric: took 3.526484ms for pod "etcd-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:50.858612  728321 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:50.862325  728321 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-603918" is "Ready"
	I1124 09:07:50.862346  728321 pod_ready.go:86] duration metric: took 3.715322ms for pod "kube-apiserver-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:50.863994  728321 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:07:47.324158  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	W1124 09:07:49.324287  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	W1124 09:07:51.824159  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	I1124 09:07:51.248382  728321 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-603918" is "Ready"
	I1124 09:07:51.248416  728321 pod_ready.go:86] duration metric: took 384.402391ms for pod "kube-controller-manager-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:51.448446  728321 pod_ready.go:83] waiting for pod "kube-proxy-5hvkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:51.848140  728321 pod_ready.go:94] pod "kube-proxy-5hvkq" is "Ready"
	I1124 09:07:51.848166  728321 pod_ready.go:86] duration metric: took 399.659801ms for pod "kube-proxy-5hvkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:52.049612  728321 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:52.449194  728321 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-603918" is "Ready"
	I1124 09:07:52.449217  728321 pod_ready.go:86] duration metric: took 399.576687ms for pod "kube-scheduler-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:52.449234  728321 pod_ready.go:40] duration metric: took 1.604961347s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
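The pod_ready.go lines above wait for each control-plane pod to report the Ready condition before the profile is declared done. A condensed sketch of that check using client-go (the kubeconfig path is a placeholder and the pod name is copied from the log; minikube's own pod_ready.go is structured differently):

// podready.go - hypothetical sketch: poll a pod until its Ready condition is
// True, the same condition the pod_ready.go log lines report on.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path; point it at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "kube-scheduler-default-k8s-diff-port-603918", 4*time.Minute))
}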
	I1124 09:07:52.494045  728321 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:07:52.496103  728321 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-603918" cluster and "default" namespace by default
	I1124 09:07:48.416413  740119 out.go:252] * Restarting existing docker container for "newest-cni-654569" ...
	I1124 09:07:48.416505  740119 cli_runner.go:164] Run: docker start newest-cni-654569
	I1124 09:07:48.699338  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:48.719279  740119 kic.go:430] container "newest-cni-654569" state is running.
	I1124 09:07:48.719771  740119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-654569
	I1124 09:07:48.721000  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:48.740378  740119 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/config.json ...
	I1124 09:07:48.740650  740119 machine.go:94] provisionDockerMachine start ...
	I1124 09:07:48.740713  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:48.761816  740119 main.go:143] libmachine: Using SSH client type: native
	I1124 09:07:48.762152  740119 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1124 09:07:48.762171  740119 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:07:48.762773  740119 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34874->127.0.0.1:33103: read: connection reset by peer
	I1124 09:07:49.060328  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:49.389088  740119 cache.go:107] acquiring lock: {Name:mkbcabeb5a23ff077ffdad64c71e9fe699d94040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389135  740119 cache.go:107] acquiring lock: {Name:mk7f052905284f586f4f1cf24b8c34cc48e0b85b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389131  740119 cache.go:107] acquiring lock: {Name:mk92c82896924ab47423467b25ccd98ee4128baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389100  740119 cache.go:107] acquiring lock: {Name:mk8023690ce5b18d9a1789b2f878bf92c1381799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389143  740119 cache.go:107] acquiring lock: {Name:mkf3a006b133f81ed32779d427a8d0a9b25f9000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389225  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:07:49.389225  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:07:49.389237  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:07:49.389249  740119 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 113.053µs
	I1124 09:07:49.389253  740119 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 188.199µs
	I1124 09:07:49.389259  740119 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 126.312µs
	I1124 09:07:49.389265  740119 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:07:49.389265  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:07:49.389269  740119 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:07:49.389248  740119 cache.go:107] acquiring lock: {Name:mk1d635b72f6d026600360916178f900a450350e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389284  740119 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 161.366µs
	I1124 09:07:49.389106  740119 cache.go:107] acquiring lock: {Name:mkd74819cb24442927f7fb2cffd47478de40e14c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389296  740119 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:07:49.389272  740119 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:07:49.389287  740119 cache.go:107] acquiring lock: {Name:mk6b573bbd33cfc3c3f77668030fb064598572fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389415  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:07:49.389425  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:07:49.389437  740119 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 258.146µs
	I1124 09:07:49.389445  740119 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 238.909µs
	I1124 09:07:49.389455  740119 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:07:49.389475  740119 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:07:49.389430  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:07:49.389496  740119 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 408.179µs
	I1124 09:07:49.389507  740119 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:07:49.389546  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:07:49.389568  740119 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 469.236µs
	I1124 09:07:49.389578  740119 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:07:49.389595  740119 cache.go:87] Successfully saved all images to host disk.
	I1124 09:07:51.905216  740119 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-654569
	
	I1124 09:07:51.905256  740119 ubuntu.go:182] provisioning hostname "newest-cni-654569"
	I1124 09:07:51.905343  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:51.923076  740119 main.go:143] libmachine: Using SSH client type: native
	I1124 09:07:51.923312  740119 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1124 09:07:51.923327  740119 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-654569 && echo "newest-cni-654569" | sudo tee /etc/hostname
	I1124 09:07:52.074711  740119 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-654569
	
	I1124 09:07:52.074778  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:52.093045  740119 main.go:143] libmachine: Using SSH client type: native
	I1124 09:07:52.093342  740119 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1124 09:07:52.093370  740119 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-654569' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-654569/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-654569' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:07:52.236140  740119 main.go:143] libmachine: SSH cmd err, output: <nil>: 
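The provisioning steps above (set the hostname, patch /etc/hosts) all run through the same mechanism: an SSH session to the container's forwarded port 33103 authenticated with the machine's private key. A bare-bones sketch of that pattern with golang.org/x/crypto/ssh (address, user, and key path are taken from the log; the helper is illustrative, not libmachine's or ssh_runner's code):

// sshrun.go - hypothetical sketch of the "About to run SSH command" pattern:
// dial the forwarded port with the machine key and run a single command.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Port 33103 and the id_rsa path are the ones shown in the log; adjust as needed.
	out, err := runOverSSH("127.0.0.1:33103", "docker",
		"/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa",
		"hostname")
	fmt.Println(out, err)
}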
	I1124 09:07:52.236193  740119 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:07:52.236221  740119 ubuntu.go:190] setting up certificates
	I1124 09:07:52.236242  740119 provision.go:84] configureAuth start
	I1124 09:07:52.236302  740119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-654569
	I1124 09:07:52.255013  740119 provision.go:143] copyHostCerts
	I1124 09:07:52.255080  740119 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:07:52.255100  740119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:07:52.255181  740119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:07:52.255372  740119 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:07:52.255389  740119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:07:52.255433  740119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:07:52.255544  740119 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:07:52.255554  740119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:07:52.255583  740119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:07:52.255650  740119 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.newest-cni-654569 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-654569]
	I1124 09:07:52.306365  740119 provision.go:177] copyRemoteCerts
	I1124 09:07:52.306413  740119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:07:52.306447  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:52.324740  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:52.426510  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:07:52.444101  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:07:52.462132  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:07:52.479956  740119 provision.go:87] duration metric: took 243.697789ms to configureAuth
	I1124 09:07:52.479981  740119 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:07:52.480188  740119 config.go:182] Loaded profile config "newest-cni-654569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:07:52.480205  740119 machine.go:97] duration metric: took 3.739539072s to provisionDockerMachine
	I1124 09:07:52.480216  740119 start.go:293] postStartSetup for "newest-cni-654569" (driver="docker")
	I1124 09:07:52.480234  740119 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:07:52.480319  740119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:07:52.480368  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:52.501120  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:52.607590  740119 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:07:52.611746  740119 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:07:52.611770  740119 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:07:52.611782  740119 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:07:52.611845  740119 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:07:52.611937  740119 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:07:52.612044  740119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:07:52.619818  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:07:52.639936  740119 start.go:296] duration metric: took 159.699932ms for postStartSetup
	I1124 09:07:52.640022  740119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:07:52.640071  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:52.663072  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:52.763563  740119 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:07:52.768496  740119 fix.go:56] duration metric: took 4.374175847s for fixHost
	I1124 09:07:52.768522  740119 start.go:83] releasing machines lock for "newest-cni-654569", held for 4.374229582s
	I1124 09:07:52.768590  740119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-654569
	I1124 09:07:52.788989  740119 ssh_runner.go:195] Run: cat /version.json
	I1124 09:07:52.789040  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:52.789095  740119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:07:52.789155  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:52.810188  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:52.810852  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:52.968219  740119 ssh_runner.go:195] Run: systemctl --version
	I1124 09:07:52.976647  740119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:07:52.982167  740119 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:07:52.982248  740119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:07:52.991521  740119 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:07:52.991541  740119 start.go:496] detecting cgroup driver to use...
	I1124 09:07:52.991575  740119 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:07:52.991634  740119 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:07:53.013188  740119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:07:53.027530  740119 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:07:53.027605  740119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:07:53.043401  740119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:07:53.055944  740119 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:07:53.138523  740119 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:07:53.225021  740119 docker.go:234] disabling docker service ...
	I1124 09:07:53.225088  740119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:07:53.239839  740119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:07:53.253817  740119 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:07:53.342228  740119 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:07:53.434381  740119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:07:53.448642  740119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:07:53.463981  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:53.781061  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 09:07:53.791059  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:07:53.800164  740119 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:07:53.800220  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:07:53.809170  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:07:53.817850  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:07:53.827229  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:07:53.835766  740119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:07:53.843728  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:07:53.852452  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:07:53.861172  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:07:53.869750  740119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:07:53.876842  740119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:07:53.884022  740119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:07:53.964111  740119 ssh_runner.go:195] Run: sudo systemctl restart containerd
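The run of sed one-liners above edits /etc/containerd/config.toml in place (sandbox image, SystemdCgroup = true for the detected systemd cgroup driver, runc v2 runtime, the CNI conf dir), then reloads systemd and restarts containerd. A small sketch of the central substitution, done in Go with a regexp instead of sed (file path from the log; illustrative, not minikube's containerd.go):

// cgroupdriver.go - hypothetical sketch: flip SystemdCgroup to true in
// containerd's config, the same substitution the sed command above performs;
// the caller would then restart containerd (systemctl restart containerd).
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setSystemdCgroup(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Match "SystemdCgroup = <anything>" on its own line, preserving indentation.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml"); err != nil {
		fmt.Println("edit failed:", err)
	}
}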
	I1124 09:07:54.057102  740119 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:07:54.057193  740119 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:07:54.061455  740119 start.go:564] Will wait 60s for crictl version
	I1124 09:07:54.061535  740119 ssh_runner.go:195] Run: which crictl
	I1124 09:07:54.065270  740119 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:07:54.089954  740119 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:07:54.090014  740119 ssh_runner.go:195] Run: containerd --version
	I1124 09:07:54.111497  740119 ssh_runner.go:195] Run: containerd --version
	I1124 09:07:54.135282  740119 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1124 09:07:54.136314  740119 cli_runner.go:164] Run: docker network inspect newest-cni-654569 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 09:07:54.154057  740119 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 09:07:54.158283  740119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:07:54.170280  740119 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 09:07:49.857357  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:07:49.857392  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:07:49.872170  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:07:49.872205  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:49.906798  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:07:49.906829  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:49.944383  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:07:49.944413  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:49.977121  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:07:49.977151  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:07:50.023751  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:07:50.023790  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:07:50.092853  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:07:50.092874  685562 logs.go:123] Gathering logs for kube-apiserver [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6] ...
	I1124 09:07:50.092887  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:50.124349  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:07:50.124378  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:50.157974  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:07:50.158005  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:50.186445  685562 logs.go:123] Gathering logs for kube-controller-manager [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2] ...
	I1124 09:07:50.186485  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:50.215211  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:07:50.215240  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:07:52.750543  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:07:52.751008  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:07:52.751076  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:07:52.751140  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:07:52.779222  685562 cri.go:89] found id: "cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:52.779253  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:52.779259  685562 cri.go:89] found id: ""
	I1124 09:07:52.779270  685562 logs.go:282] 2 containers: [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:07:52.779325  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.783396  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.787381  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:07:52.787433  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:07:52.819643  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:52.819665  685562 cri.go:89] found id: ""
	I1124 09:07:52.819675  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:07:52.819727  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.824397  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:07:52.824483  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:07:52.852859  685562 cri.go:89] found id: ""
	I1124 09:07:52.852884  685562 logs.go:282] 0 containers: []
	W1124 09:07:52.852893  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:07:52.852901  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:07:52.852958  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:07:52.880546  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:52.880574  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:52.880581  685562 cri.go:89] found id: ""
	I1124 09:07:52.880596  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:07:52.880655  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.884728  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.888394  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:07:52.888449  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:07:52.913593  685562 cri.go:89] found id: ""
	I1124 09:07:52.913619  685562 logs.go:282] 0 containers: []
	W1124 09:07:52.913629  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:07:52.913637  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:07:52.913691  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:07:52.940155  685562 cri.go:89] found id: "d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:52.940175  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:52.940181  685562 cri.go:89] found id: ""
	I1124 09:07:52.940192  685562 logs.go:282] 2 containers: [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:07:52.940249  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.944598  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.948215  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:07:52.948283  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:07:52.975415  685562 cri.go:89] found id: ""
	I1124 09:07:52.975443  685562 logs.go:282] 0 containers: []
	W1124 09:07:52.975453  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:07:52.975491  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:07:52.975555  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:07:53.007272  685562 cri.go:89] found id: ""
	I1124 09:07:53.007301  685562 logs.go:282] 0 containers: []
	W1124 09:07:53.007312  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:07:53.007333  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:07:53.007347  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:07:53.121553  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:07:53.121586  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:07:53.188763  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:07:53.188783  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:07:53.188795  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:53.222509  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:07:53.222540  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:53.250796  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:07:53.250823  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:07:53.302451  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:07:53.302504  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:07:53.334584  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:07:53.334613  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:07:53.349579  685562 logs.go:123] Gathering logs for kube-apiserver [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6] ...
	I1124 09:07:53.349601  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:53.385162  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:07:53.385192  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:53.418890  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:07:53.418929  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:53.453244  685562 logs.go:123] Gathering logs for kube-controller-manager [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2] ...
	I1124 09:07:53.453269  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:53.481875  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:07:53.481910  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:54.171385  740119 kubeadm.go:884] updating cluster {Name:newest-cni-654569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-654569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:07:54.171609  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:54.484998  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:54.798141  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:55.115412  740119 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:07:55.115492  740119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:07:55.141981  740119 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:07:55.142005  740119 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:07:55.142015  740119 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1124 09:07:55.142138  740119 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-654569 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-654569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:07:55.142213  740119 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:07:55.168053  740119 cni.go:84] Creating CNI manager for ""
	I1124 09:07:55.168076  740119 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:07:55.168099  740119 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 09:07:55.168136  740119 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-654569 NodeName:newest-cni-654569 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:07:55.168268  740119 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-654569"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:07:55.168345  740119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:07:55.176299  740119 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:07:55.176368  740119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:07:55.184050  740119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1124 09:07:55.197168  740119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:07:55.209710  740119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1124 09:07:55.222578  740119 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:07:55.225988  740119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:07:55.235552  740119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:07:55.313702  740119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:07:55.335543  740119 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569 for IP: 192.168.103.2
	I1124 09:07:55.335565  740119 certs.go:195] generating shared ca certs ...
	I1124 09:07:55.335598  740119 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:07:55.335764  740119 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:07:55.335825  740119 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:07:55.335838  740119 certs.go:257] generating profile certs ...
	I1124 09:07:55.335956  740119 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/client.key
	I1124 09:07:55.336043  740119 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/apiserver.key.7c762e30
	I1124 09:07:55.336093  740119 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/proxy-client.key
	I1124 09:07:55.336234  740119 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:07:55.336298  740119 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:07:55.336312  740119 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:07:55.336362  740119 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:07:55.336411  740119 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:07:55.336441  740119 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:07:55.336501  740119 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:07:55.337131  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:07:55.356448  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:07:55.375062  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:07:55.393674  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:07:55.417631  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:07:55.439443  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:07:55.457653  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:07:55.475347  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:07:55.493913  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:07:55.510946  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:07:55.529348  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:07:55.549329  740119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:07:55.564652  740119 ssh_runner.go:195] Run: openssl version
	I1124 09:07:55.571017  740119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:07:55.580738  740119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:07:55.584597  740119 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:07:55.584654  740119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:07:55.625418  740119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:07:55.634285  740119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:07:55.645169  740119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:07:55.649484  740119 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:07:55.649544  740119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:07:55.688419  740119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:07:55.698286  740119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:07:55.707646  740119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:07:55.711576  740119 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:07:55.711628  740119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:07:55.746241  740119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
	I1124 09:07:55.757113  740119 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:07:55.761360  740119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:07:55.796496  740119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:07:55.833324  740119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:07:55.871233  740119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:07:55.928790  740119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:07:55.981088  740119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 09:07:56.038125  740119 kubeadm.go:401] StartCluster: {Name:newest-cni-654569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-654569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:07:56.038266  740119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:07:56.038340  740119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:07:56.089186  740119 cri.go:89] found id: "a157a979800211fa2e48d8456dc72d55487fd44672e748a038a14bcc77c5426d"
	I1124 09:07:56.089214  740119 cri.go:89] found id: "4d92cd75ec81a3e2b7fa0b35523d1d0fc3ccacfa3b38f5f98d2655b7a7c124a2"
	I1124 09:07:56.089219  740119 cri.go:89] found id: "dee3d2e2ae24367219d65f5301765dfc5ce4b878b6bf6b20475c4530de6b6720"
	I1124 09:07:56.089225  740119 cri.go:89] found id: "75f98a0d1c57b6adbaedd7f4784510d16b26d25295e93b1412b611a036a9853b"
	I1124 09:07:56.089229  740119 cri.go:89] found id: "f4e1fceba7711096161d4a95501e91ea1d83cfe4c620e5995126dd9c543b960f"
	I1124 09:07:56.089246  740119 cri.go:89] found id: "3e84b165b0b37fab2be27fc4595dad9d25ec66c3a3f0b546bac1d95f55f60749"
	I1124 09:07:56.089251  740119 cri.go:89] found id: "158de48e001d34e944b0f5bc8cd62e5c78fdfe8edb46bdd955885f2b6b096c38"
	I1124 09:07:56.089255  740119 cri.go:89] found id: "e31cf74acac5f31b3b47fc57578c8eb5620c5f68b51d75b3d896d2fdc6759487"
	I1124 09:07:56.089258  740119 cri.go:89] found id: "a6a092f46c17fe1320efa54d0d748c6d5d89cbc4d13446b32d574312c288c0ff"
	I1124 09:07:56.089267  740119 cri.go:89] found id: ""
	I1124 09:07:56.089316  740119 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1124 09:07:56.130416  740119 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8","pid":863,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8/rootfs","created":"2025-11-24T09:07:55.953337283Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-654569_c6dcb99e56c6b456784e4cc4e4a8aa33","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c6dcb99e56c6b456784e4cc4e4a8aa33"},"owner":"root"},{"ociVersion":"1.2.1","id":"3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef","pid":855,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef/rootfs","created":"2025-11-24T09:07:55.94638955Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-654569_d7c5b44497a828ab83d4aadcafefd5cb","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d7c5b44497a828ab83d4aadcafefd5cb"},"owner":"root"},{"ociVersion":"1.2.1","id":"4d92cd75ec81a3e2b7fa0b35523d1d0fc3ccacfa3b38f5f98d2655b7a7c124a2","pid":958,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d92cd75ec81a3e2b7fa0b35523d1d0fc3ccacfa3b38f5f98d2655b7a7c124a2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d92cd75ec81a3e2b7fa0b35523d1d0fc3ccacfa3b38f5f98d2655b7a7c124a2/rootfs","created":"2025-11-24T09:07:56.065006307Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0","io.kubernetes.cri.sandbox-id":"3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d7c5b44497a828ab83d4aadcafefd5cb"},"owner":"root"},{"ociVersion":"1.2.1","id":"75f98a0d1c57b6adbaedd7f4784510d16b26d25295e93b1412b611a036a9853b","pid":937,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/75f98a0d1c57b6adbaedd7f4784510d16b26d25295e93b1412b611a036a9853b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/75f98a0d1c57b6adbaedd7f4784510d16b26d25295e93b1412b611a036a9853b/rootfs","created":"2025-11-24T09:07:56.048707764Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.5-0","io.kubernetes.cri.sandbox-id":"ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4bfaebf212f3ea670ce06d699a6f1411"},"owner":"root"},{"ociVersion":"1.2.1","id":"a157a979800211fa2e48d8456dc72d55487fd44672e748a038a14bcc77c5426d","pid":973,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a157a979800211fa2e48d8456dc72d55487fd44672e748a038a14bcc77c5426d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a157a979800211fa2e48d8456dc72d55487fd44672e748a038a14bcc77c5426d/rootfs","created":"2025-11-24T09:07:56.059491091Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.35.0-beta.0","io.kubernetes.cri.sandbox-id":"085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c6dcb99e56c6b456784e4cc4e4a8aa33"},"owner":"root"},{"ociVersion":"1.2.1","id":"d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857","pid":824,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857/rootfs","created":"2025-11-24T09:07:55.935948038Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-654569_536efe6b5a7bd07d056d539cdc365e07","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"536efe6b5a7bd07d056d539cdc365e07"},"owner":"root"},{"ociVersion":"1.2.1","id":"dee3d2e2ae24367219d65f5301765dfc5ce4b878b6bf6b20475c4530de6b6720","pid":946,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dee3d2e2ae24367219d65f5301765dfc5ce4b878b6bf6b20475c4530de6b6720","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dee3d2e2ae24367219d65f5301765dfc5ce4b878b6bf6b20475c4530de6b6720/rootfs","created":"2025-11-24T09:07:56.054487864Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.35.0-beta.0","io.kubernetes.cri.sandbox-id":"d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"536efe6b5a7bd07d056d539cdc365e07"},"owner":"root"},{"ociVersion":"1.2.1","id":"ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847","pid":810,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847/rootfs","created":"2025-11-24T09:07:55.928063262Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-654569_4bfaebf212f3ea670ce06d699a6f1411","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4bfaebf212f3ea670ce06d699a6f1411"},"owner":"root"}]
	I1124 09:07:56.130724  740119 cri.go:126] list returned 8 containers
	I1124 09:07:56.130741  740119 cri.go:129] container: {ID:085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8 Status:running}
	I1124 09:07:56.130760  740119 cri.go:131] skipping 085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8 - not in ps
	I1124 09:07:56.130766  740119 cri.go:129] container: {ID:3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef Status:running}
	I1124 09:07:56.130772  740119 cri.go:131] skipping 3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef - not in ps
	I1124 09:07:56.130778  740119 cri.go:129] container: {ID:4d92cd75ec81a3e2b7fa0b35523d1d0fc3ccacfa3b38f5f98d2655b7a7c124a2 Status:running}
	I1124 09:07:56.130797  740119 cri.go:135] skipping {4d92cd75ec81a3e2b7fa0b35523d1d0fc3ccacfa3b38f5f98d2655b7a7c124a2 running}: state = "running", want "paused"
	I1124 09:07:56.130808  740119 cri.go:129] container: {ID:75f98a0d1c57b6adbaedd7f4784510d16b26d25295e93b1412b611a036a9853b Status:running}
	I1124 09:07:56.130814  740119 cri.go:135] skipping {75f98a0d1c57b6adbaedd7f4784510d16b26d25295e93b1412b611a036a9853b running}: state = "running", want "paused"
	I1124 09:07:56.130821  740119 cri.go:129] container: {ID:a157a979800211fa2e48d8456dc72d55487fd44672e748a038a14bcc77c5426d Status:running}
	I1124 09:07:56.130829  740119 cri.go:135] skipping {a157a979800211fa2e48d8456dc72d55487fd44672e748a038a14bcc77c5426d running}: state = "running", want "paused"
	I1124 09:07:56.130835  740119 cri.go:129] container: {ID:d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857 Status:running}
	I1124 09:07:56.130842  740119 cri.go:131] skipping d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857 - not in ps
	I1124 09:07:56.130849  740119 cri.go:129] container: {ID:dee3d2e2ae24367219d65f5301765dfc5ce4b878b6bf6b20475c4530de6b6720 Status:running}
	I1124 09:07:56.130857  740119 cri.go:135] skipping {dee3d2e2ae24367219d65f5301765dfc5ce4b878b6bf6b20475c4530de6b6720 running}: state = "running", want "paused"
	I1124 09:07:56.130863  740119 cri.go:129] container: {ID:ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847 Status:running}
	I1124 09:07:56.130871  740119 cri.go:131] skipping ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847 - not in ps
	I1124 09:07:56.130937  740119 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:07:56.143034  740119 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:07:56.143057  740119 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:07:56.143107  740119 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:07:56.156401  740119 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:07:56.157955  740119 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-654569" does not appear in /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:07:56.158947  740119 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-435860/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-654569" cluster setting kubeconfig missing "newest-cni-654569" context setting]
	I1124 09:07:56.161516  740119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:07:56.165172  740119 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:07:56.177468  740119 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1124 09:07:56.177509  740119 kubeadm.go:602] duration metric: took 34.445893ms to restartPrimaryControlPlane
	I1124 09:07:56.177544  740119 kubeadm.go:403] duration metric: took 139.430697ms to StartCluster
	I1124 09:07:56.177569  740119 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:07:56.177697  740119 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:07:56.180208  740119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:07:56.180569  740119 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:07:56.181072  740119 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:07:56.181193  740119 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-654569"
	I1124 09:07:56.181213  740119 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-654569"
	W1124 09:07:56.181228  740119 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:07:56.181233  740119 addons.go:70] Setting default-storageclass=true in profile "newest-cni-654569"
	I1124 09:07:56.181253  740119 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-654569"
	I1124 09:07:56.181258  740119 host.go:66] Checking if "newest-cni-654569" exists ...
	I1124 09:07:56.181259  740119 addons.go:70] Setting dashboard=true in profile "newest-cni-654569"
	I1124 09:07:56.181277  740119 addons.go:239] Setting addon dashboard=true in "newest-cni-654569"
	W1124 09:07:56.181285  740119 addons.go:248] addon dashboard should already be in state true
	I1124 09:07:56.181350  740119 host.go:66] Checking if "newest-cni-654569" exists ...
	I1124 09:07:56.181589  740119 addons.go:70] Setting metrics-server=true in profile "newest-cni-654569"
	I1124 09:07:56.181618  740119 addons.go:239] Setting addon metrics-server=true in "newest-cni-654569"
	W1124 09:07:56.181627  740119 addons.go:248] addon metrics-server should already be in state true
	I1124 09:07:56.181656  740119 host.go:66] Checking if "newest-cni-654569" exists ...
	I1124 09:07:56.181211  740119 config.go:182] Loaded profile config "newest-cni-654569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:07:56.181598  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:56.181797  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:56.181818  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:56.182113  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:56.188544  740119 out.go:179] * Verifying Kubernetes components...
	I1124 09:07:56.190275  740119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:07:56.212861  740119 addons.go:239] Setting addon default-storageclass=true in "newest-cni-654569"
	W1124 09:07:56.213062  740119 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:07:56.213130  740119 host.go:66] Checking if "newest-cni-654569" exists ...
	I1124 09:07:56.214362  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:56.218487  740119 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 09:07:56.218482  740119 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:07:56.219671  740119 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 09:07:56.220234  740119 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 09:07:56.220300  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:56.221641  740119 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:07:56.223298  740119 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 09:07:54.323974  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	W1124 09:07:56.325833  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	I1124 09:07:56.223300  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:07:56.223453  740119 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:07:56.223558  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:56.224350  740119 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:07:56.224372  740119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:07:56.224430  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:56.242662  740119 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:07:56.242685  740119 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:07:56.242745  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:56.256044  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:56.263870  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:56.266228  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:56.290380  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:56.375577  740119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:07:56.395131  740119 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:07:56.395217  740119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:07:56.405436  740119 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 09:07:56.406355  740119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1124 09:07:56.407356  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:07:56.407371  740119 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:07:56.408594  740119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:07:56.417254  740119 api_server.go:72] duration metric: took 236.638633ms to wait for apiserver process to appear ...
	I1124 09:07:56.417983  740119 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:07:56.418024  740119 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:07:56.425934  740119 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 09:07:56.425958  740119 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 09:07:56.426098  740119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:07:56.431454  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:07:56.431492  740119 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:07:56.446759  740119 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:07:56.446786  740119 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 09:07:56.459386  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:07:56.459415  740119 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:07:56.471259  740119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:07:56.479755  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:07:56.479778  740119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:07:56.496377  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:07:56.496403  740119 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:07:56.512775  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:07:56.512802  740119 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:07:56.529546  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:07:56.529574  740119 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:07:56.543946  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:07:56.543970  740119 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:07:56.559581  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:07:56.559607  740119 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:07:56.574571  740119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:07:58.027053  740119 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:07:58.027087  740119 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:07:58.027101  740119 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:07:58.039799  740119 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:07:58.039831  740119 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:07:58.419149  740119 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:07:58.424822  740119 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:07:58.424853  740119 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:07:58.622336  740119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.213680166s)
	I1124 09:07:58.622395  740119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.196259725s)
	I1124 09:07:58.624475  740119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.153162875s)
	I1124 09:07:58.624505  740119 addons.go:495] Verifying addon metrics-server=true in "newest-cni-654569"
	I1124 09:07:58.624568  740119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.049945341s)
	I1124 09:07:58.628984  740119 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-654569 addons enable metrics-server
	
	I1124 09:07:58.634190  740119 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1124 09:07:58.635259  740119 addons.go:530] duration metric: took 2.454203545s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1124 09:07:58.918078  740119 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:07:58.922902  740119 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:07:58.922941  740119 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:07:59.418541  740119 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:07:59.422776  740119 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 09:07:59.423896  740119 api_server.go:141] control plane version: v1.35.0-beta.0
	I1124 09:07:59.423923  740119 api_server.go:131] duration metric: took 3.005920248s to wait for apiserver health ...
	I1124 09:07:59.423937  740119 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:07:59.427925  740119 system_pods.go:59] 9 kube-system pods found
	I1124 09:07:59.427952  740119 system_pods.go:61] "coredns-7d764666f9-x9q9b" [506d2b46-76b4-495b-92ec-1d61d12cdb7c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 09:07:59.427960  740119 system_pods.go:61] "etcd-newest-cni-654569" [0a522704-a865-4e7c-8ebe-d642c5a9818c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:07:59.427969  740119 system_pods.go:61] "kindnet-qnftx" [11feac68-231b-41fd-a5b6-cb38432ab914] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:07:59.427977  740119 system_pods.go:61] "kube-apiserver-newest-cni-654569" [792974fb-5baf-43b4-b16f-984afe8de703] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:07:59.427983  740119 system_pods.go:61] "kube-controller-manager-newest-cni-654569" [4bd5630b-c62e-4b79-83cb-ac16b0119af9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:07:59.427988  740119 system_pods.go:61] "kube-proxy-tnmqt" [c21f06f2-1c7b-4a84-ada1-ce50e281f77d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:07:59.427993  740119 system_pods.go:61] "kube-scheduler-newest-cni-654569" [eadf3127-15eb-4f9f-afc4-00c1e19cacca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:07:59.428001  740119 system_pods.go:61] "metrics-server-5d785b57d4-qhnmt" [ae201e6f-2fb5-4b64-a376-31b95b002461] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 09:07:59.428010  740119 system_pods.go:61] "storage-provisioner" [930332b4-361f-418c-abf4-8d05d08ef9dd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 09:07:59.428016  740119 system_pods.go:74] duration metric: took 4.072733ms to wait for pod list to return data ...
	I1124 09:07:59.428026  740119 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:07:59.430199  740119 default_sa.go:45] found service account: "default"
	I1124 09:07:59.430224  740119 default_sa.go:55] duration metric: took 2.191389ms for default service account to be created ...
	I1124 09:07:59.430236  740119 kubeadm.go:587] duration metric: took 3.249627773s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 09:07:59.430252  740119 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:07:59.432586  740119 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:07:59.432612  740119 node_conditions.go:123] node cpu capacity is 8
	I1124 09:07:59.432631  740119 node_conditions.go:105] duration metric: took 2.37222ms to run NodePressure ...
	I1124 09:07:59.432647  740119 start.go:242] waiting for startup goroutines ...
	I1124 09:07:59.432661  740119 start.go:247] waiting for cluster config update ...
	I1124 09:07:59.432679  740119 start.go:256] writing updated cluster config ...
	I1124 09:07:59.432927  740119 ssh_runner.go:195] Run: rm -f paused
	I1124 09:07:59.491414  740119 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1124 09:07:59.492851  740119 out.go:179] * Done! kubectl is now configured to use "newest-cni-654569" cluster and "default" namespace by default
	I1124 09:07:56.020541  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:07:56.021147  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:07:56.021210  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:07:56.021265  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:07:56.067013  685562 cri.go:89] found id: "cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:56.067049  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:56.067057  685562 cri.go:89] found id: ""
	I1124 09:07:56.067068  685562 logs.go:282] 2 containers: [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:07:56.067133  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.072142  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.077032  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:07:56.077096  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:07:56.121815  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:56.121844  685562 cri.go:89] found id: ""
	I1124 09:07:56.121854  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:07:56.121916  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.127997  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:07:56.128077  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:07:56.168618  685562 cri.go:89] found id: ""
	I1124 09:07:56.168642  685562 logs.go:282] 0 containers: []
	W1124 09:07:56.168667  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:07:56.168677  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:07:56.168742  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:07:56.218281  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:56.218356  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:56.218373  685562 cri.go:89] found id: ""
	I1124 09:07:56.218393  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:07:56.218528  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.224636  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.229661  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:07:56.229765  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:07:56.293945  685562 cri.go:89] found id: ""
	I1124 09:07:56.293977  685562 logs.go:282] 0 containers: []
	W1124 09:07:56.293988  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:07:56.293996  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:07:56.294060  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:07:56.334478  685562 cri.go:89] found id: "d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:56.334503  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:56.334509  685562 cri.go:89] found id: ""
	I1124 09:07:56.334519  685562 logs.go:282] 2 containers: [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:07:56.334580  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.340444  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.345844  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:07:56.345926  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:07:56.380080  685562 cri.go:89] found id: ""
	I1124 09:07:56.380105  685562 logs.go:282] 0 containers: []
	W1124 09:07:56.380114  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:07:56.380122  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:07:56.380178  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:07:56.420110  685562 cri.go:89] found id: ""
	I1124 09:07:56.420138  685562 logs.go:282] 0 containers: []
	W1124 09:07:56.420156  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:07:56.420171  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:07:56.420193  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:07:56.442022  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:07:56.442066  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:56.490969  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:07:56.491011  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:56.527453  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:07:56.527506  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:07:56.562016  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:07:56.562048  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:07:56.660117  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:07:56.660153  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:07:56.718059  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:07:56.718087  685562 logs.go:123] Gathering logs for kube-apiserver [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6] ...
	I1124 09:07:56.718105  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:56.750284  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:07:56.750317  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:56.785923  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:07:56.785954  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:56.821311  685562 logs.go:123] Gathering logs for kube-controller-manager [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2] ...
	I1124 09:07:56.821343  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:56.849832  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:07:56.849859  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:56.884094  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:07:56.884132  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:07:59.430422  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:07:59.430857  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:07:59.430924  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:07:59.430985  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:07:59.460698  685562 cri.go:89] found id: "cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:59.460723  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:59.460729  685562 cri.go:89] found id: ""
	I1124 09:07:59.460739  685562 logs.go:282] 2 containers: [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:07:59.460804  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.465196  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.469225  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:07:59.469304  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:07:59.502134  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:59.502174  685562 cri.go:89] found id: ""
	I1124 09:07:59.502186  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:07:59.502243  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.506739  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:07:59.506808  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:07:59.539002  685562 cri.go:89] found id: ""
	I1124 09:07:59.539033  685562 logs.go:282] 0 containers: []
	W1124 09:07:59.539045  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:07:59.539055  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:07:59.539149  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:07:59.568146  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:59.568167  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:59.568172  685562 cri.go:89] found id: ""
	I1124 09:07:59.568181  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:07:59.568248  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.572864  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.577269  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:07:59.577338  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:07:59.607818  685562 cri.go:89] found id: ""
	I1124 09:07:59.607848  685562 logs.go:282] 0 containers: []
	W1124 09:07:59.607860  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:07:59.607869  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:07:59.607928  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:07:59.638184  685562 cri.go:89] found id: "d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:59.638205  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:59.638210  685562 cri.go:89] found id: ""
	I1124 09:07:59.638219  685562 logs.go:282] 2 containers: [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:07:59.638278  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.642979  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.646971  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:07:59.647028  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:07:59.675306  685562 cri.go:89] found id: ""
	I1124 09:07:59.675330  685562 logs.go:282] 0 containers: []
	W1124 09:07:59.675338  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:07:59.675348  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:07:59.675396  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:07:59.702893  685562 cri.go:89] found id: ""
	I1124 09:07:59.702927  685562 logs.go:282] 0 containers: []
	W1124 09:07:59.702940  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:07:59.702954  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:07:59.702968  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:59.739374  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:07:59.739405  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:59.779375  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:07:59.779419  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:07:59.835861  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:07:59.835893  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1124 09:07:58.824571  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	W1124 09:08:01.324130  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	0048863eeab5b       56cc512116c8f       8 seconds ago       Running             busybox                   0                   6a1c58c44dabd       busybox                                                default
	4cb7a2e1543a2       52546a367cc9e       13 seconds ago      Running             coredns                   0                   754bbf6ee037f       coredns-66bc5c9577-xrvmp                               kube-system
	7b6759161aaf7       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   d739ceffcd719       storage-provisioner                                    kube-system
	d61b328ab5ab1       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   94ba4ea8cc394       kindnet-b9gr6                                          kube-system
	d08299c781b5b       8aa150647e88a       24 seconds ago      Running             kube-proxy                0                   2f34fd49731c3       kube-proxy-5hvkq                                       kube-system
	8511ac48cd627       88320b5498ff2       34 seconds ago      Running             kube-scheduler            0                   2a5f4ee9cdbe8       kube-scheduler-default-k8s-diff-port-603918            kube-system
	dd669bd5eb5c8       a3e246e9556e9       34 seconds ago      Running             etcd                      0                   306d5a6f33d85       etcd-default-k8s-diff-port-603918                      kube-system
	ab596f3f89dfb       01e8bacf0f500       34 seconds ago      Running             kube-controller-manager   0                   8792115764e5c       kube-controller-manager-default-k8s-diff-port-603918   kube-system
	2360a77fd7012       a5f569d49a979       34 seconds ago      Running             kube-apiserver            0                   75341afc5f34d       kube-apiserver-default-k8s-diff-port-603918            kube-system
	
	
	==> containerd <==
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.478802026Z" level=info msg="Container 4cb7a2e1543a2f315a2834a2bbafb7016a0d6e1122b995a93ef534144c83b8d7: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.479012008Z" level=info msg="CreateContainer within sandbox \"d739ceffcd719ee21dc72de12352bcc6b46a8ea7096e691b55001bcadbbe3d5b\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"7b6759161aaf750bd83cd8f574e0289de4f56c8660bfd1a4f8c9fef29a584e58\""
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.479545666Z" level=info msg="StartContainer for \"7b6759161aaf750bd83cd8f574e0289de4f56c8660bfd1a4f8c9fef29a584e58\""
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.480346722Z" level=info msg="connecting to shim 7b6759161aaf750bd83cd8f574e0289de4f56c8660bfd1a4f8c9fef29a584e58" address="unix:///run/containerd/s/d778d31c26635c66d6dc4f813da4e6a22952fbeba29440ec23af6ffefe8d0d08" protocol=ttrpc version=3
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.486155548Z" level=info msg="CreateContainer within sandbox \"754bbf6ee037faf2eb0ab5772f9d30688e7f23c89ac6c4b2ede2527106b6acca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4cb7a2e1543a2f315a2834a2bbafb7016a0d6e1122b995a93ef534144c83b8d7\""
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.486734729Z" level=info msg="StartContainer for \"4cb7a2e1543a2f315a2834a2bbafb7016a0d6e1122b995a93ef534144c83b8d7\""
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.487838064Z" level=info msg="connecting to shim 4cb7a2e1543a2f315a2834a2bbafb7016a0d6e1122b995a93ef534144c83b8d7" address="unix:///run/containerd/s/9462e11f343f3511322fa0215f82b2720128f229d0e5deb7bb15503f13750280" protocol=ttrpc version=3
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.534124607Z" level=info msg="StartContainer for \"7b6759161aaf750bd83cd8f574e0289de4f56c8660bfd1a4f8c9fef29a584e58\" returns successfully"
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.541247266Z" level=info msg="StartContainer for \"4cb7a2e1543a2f315a2834a2bbafb7016a0d6e1122b995a93ef534144c83b8d7\" returns successfully"
	Nov 24 09:07:52 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:52.963904471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:4581197a-228b-4f7d-a2bc-a5ef7b7eb2a7,Namespace:default,Attempt:0,}"
	Nov 24 09:07:52 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:52.995556019Z" level=info msg="connecting to shim 6a1c58c44dabd14a03b5dfe863c4973f78579ecb41a4fa7ac911166778977c19" address="unix:///run/containerd/s/3786ba091400f81e491ed3aac208c2bb9dc958d4a21390cd1a8551bca30a1796" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 09:07:53 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:53.072731700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:4581197a-228b-4f7d-a2bc-a5ef7b7eb2a7,Namespace:default,Attempt:0,} returns sandbox id \"6a1c58c44dabd14a03b5dfe863c4973f78579ecb41a4fa7ac911166778977c19\""
	Nov 24 09:07:53 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:53.074976810Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.615774305Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.616426464Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396641"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.617548446Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.619356653Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.619993882Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.544967961s"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.620039028Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.624039708Z" level=info msg="CreateContainer within sandbox \"6a1c58c44dabd14a03b5dfe863c4973f78579ecb41a4fa7ac911166778977c19\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.630138239Z" level=info msg="Container 0048863eeab5b5eb4ce7dee195c0d1f07faf77d0f583e8b70f21b5cafcbe8dc0: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.635886425Z" level=info msg="CreateContainer within sandbox \"6a1c58c44dabd14a03b5dfe863c4973f78579ecb41a4fa7ac911166778977c19\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"0048863eeab5b5eb4ce7dee195c0d1f07faf77d0f583e8b70f21b5cafcbe8dc0\""
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.636348018Z" level=info msg="StartContainer for \"0048863eeab5b5eb4ce7dee195c0d1f07faf77d0f583e8b70f21b5cafcbe8dc0\""
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.637133578Z" level=info msg="connecting to shim 0048863eeab5b5eb4ce7dee195c0d1f07faf77d0f583e8b70f21b5cafcbe8dc0" address="unix:///run/containerd/s/3786ba091400f81e491ed3aac208c2bb9dc958d4a21390cd1a8551bca30a1796" protocol=ttrpc version=3
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.695423794Z" level=info msg="StartContainer for \"0048863eeab5b5eb4ce7dee195c0d1f07faf77d0f583e8b70f21b5cafcbe8dc0\" returns successfully"
	
	
	==> coredns [4cb7a2e1543a2f315a2834a2bbafb7016a0d6e1122b995a93ef534144c83b8d7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56982 - 58653 "HINFO IN 4688269613880167346.4194427648079874584. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020568653s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-603918
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-603918
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=default-k8s-diff-port-603918
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_07_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:07:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-603918
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:07:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:07:50 +0000   Mon, 24 Nov 2025 09:07:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:07:50 +0000   Mon, 24 Nov 2025 09:07:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:07:50 +0000   Mon, 24 Nov 2025 09:07:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:07:50 +0000   Mon, 24 Nov 2025 09:07:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-603918
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                18145d9a-fbb9-4960-a6df-c69396b8f79c
	  Boot ID:                    f052cd47-57de-4521-b5fb-139979fdced9
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-xrvmp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-603918                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-b9gr6                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-603918             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-603918    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-5hvkq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-603918             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node default-k8s-diff-port-603918 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node default-k8s-diff-port-603918 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node default-k8s-diff-port-603918 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node default-k8s-diff-port-603918 event: Registered Node default-k8s-diff-port-603918 in Controller
	  Normal  NodeReady                14s   kubelet          Node default-k8s-diff-port-603918 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [dd669bd5eb5c858534503bf9a36b221ef9818ee825b047bcb02a309c174d8b48] <==
	{"level":"warn","ts":"2025-11-24T09:07:30.327907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.348098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.354371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.362872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.370490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.377810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.386421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.394604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.401242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.408583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.416123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.423135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.430751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.453228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.460247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.467236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.522898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:34.346220Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.839529ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597273249824879 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" value_size:124 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:07:34.346355Z","caller":"traceutil/trace.go:172","msg":"trace[1201803426] transaction","detail":"{read_only:false; response_revision:260; number_of_response:1; }","duration":"150.764396ms","start":"2025-11-24T09:07:34.195577Z","end":"2025-11-24T09:07:34.346341Z","steps":["trace[1201803426] 'process raft request'  (duration: 38.451987ms)","trace[1201803426] 'compare'  (duration: 111.720232ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:07:34.554980Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.226305ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" limit:1 ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2025-11-24T09:07:34.555037Z","caller":"traceutil/trace.go:172","msg":"trace[1924137252] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:261; }","duration":"118.302739ms","start":"2025-11-24T09:07:34.436720Z","end":"2025-11-24T09:07:34.555022Z","steps":["trace[1924137252] 'agreement among raft nodes before linearized reading'  (duration: 52.872904ms)","trace[1924137252] 'range keys from in-memory index tree'  (duration: 65.264136ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:07:34.555129Z","caller":"traceutil/trace.go:172","msg":"trace[1620803049] transaction","detail":"{read_only:false; response_revision:262; number_of_response:1; }","duration":"189.317797ms","start":"2025-11-24T09:07:34.365775Z","end":"2025-11-24T09:07:34.555093Z","steps":["trace[1620803049] 'process raft request'  (duration: 123.831751ms)","trace[1620803049] 'compare'  (duration: 65.302143ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:07:34.734315Z","caller":"traceutil/trace.go:172","msg":"trace[975367433] transaction","detail":"{read_only:false; response_revision:263; number_of_response:1; }","duration":"139.111949ms","start":"2025-11-24T09:07:34.595180Z","end":"2025-11-24T09:07:34.734292Z","steps":["trace[975367433] 'process raft request'  (duration: 83.348894ms)","trace[975367433] 'compare'  (duration: 55.630472ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:07:34.821844Z","caller":"traceutil/trace.go:172","msg":"trace[942765471] transaction","detail":"{read_only:false; number_of_response:0; response_revision:263; }","duration":"122.875835ms","start":"2025-11-24T09:07:34.698933Z","end":"2025-11-24T09:07:34.821809Z","steps":["trace[942765471] 'process raft request'  (duration: 122.768079ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:07:34.821906Z","caller":"traceutil/trace.go:172","msg":"trace[964689619] transaction","detail":"{read_only:false; number_of_response:0; response_revision:263; }","duration":"122.955322ms","start":"2025-11-24T09:07:34.698933Z","end":"2025-11-24T09:07:34.821888Z","steps":["trace[964689619] 'process raft request'  (duration: 122.83712ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:08:04 up  3:50,  0 user,  load average: 3.48, 3.63, 9.85
	Linux default-k8s-diff-port-603918 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d61b328ab5ab1269962ba5787c878a3ecd23c246f9a62364bfb4b78afc389098] <==
	I1124 09:07:39.814803       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:07:39.815136       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 09:07:39.815276       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:07:39.815294       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:07:39.815319       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:07:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:07:40.101193       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:07:40.101229       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:07:40.101254       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:07:40.110763       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:07:40.501891       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:07:40.501928       1 metrics.go:72] Registering metrics
	I1124 09:07:40.502008       1 controller.go:711] "Syncing nftables rules"
	I1124 09:07:50.022679       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:07:50.022724       1 main.go:301] handling current node
	I1124 09:08:00.022555       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:08:00.022617       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2360a77fd7012a398acfbb7b6a080849121db124c72a7255c5b1d2f454bee8e8] <==
	I1124 09:07:31.132172       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:07:31.136040       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:07:31.136063       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 09:07:31.141192       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:07:31.142578       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:07:31.236592       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:07:31.951000       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:07:31.959171       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:07:31.959724       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:07:32.584208       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:07:32.626372       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:07:32.739252       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:07:32.745183       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 09:07:32.746252       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:07:32.750287       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:07:32.949296       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:07:33.774818       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:07:33.788343       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:07:33.797967       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:07:38.456274       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:07:38.463175       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:07:38.748721       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 09:07:38.748722       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 09:07:38.951300       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1124 09:08:02.772709       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:50882: use of closed network connection
	
	
	==> kube-controller-manager [ab596f3f89dfbaa2fced115c34da995ab6bdbb1e8f8fdf34ac0ab8f1fbbe292c] <==
	I1124 09:07:37.954863       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-603918"
	I1124 09:07:37.954937       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 09:07:37.950531       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 09:07:37.950549       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 09:07:37.950561       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 09:07:37.950583       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 09:07:37.950602       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 09:07:37.950611       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 09:07:37.950629       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 09:07:37.950641       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 09:07:37.950659       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 09:07:37.957684       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:07:37.950668       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 09:07:37.950676       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 09:07:37.951952       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 09:07:37.951982       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 09:07:37.957860       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:07:37.958583       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 09:07:37.959926       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 09:07:37.964432       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 09:07:37.974514       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:07:37.974537       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 09:07:37.974548       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 09:07:37.974576       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:07:52.957603       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d08299c781b5bb99b671160da6d283abbaf60a124f5358cb647fbe5f2a4706bc] <==
	I1124 09:07:39.361203       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:07:39.424742       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:07:39.525130       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:07:39.525172       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 09:07:39.525309       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:07:39.546419       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:07:39.546498       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:07:39.551771       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:07:39.552128       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:07:39.552171       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:07:39.553631       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:07:39.553655       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:07:39.553685       1 config.go:200] "Starting service config controller"
	I1124 09:07:39.553691       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:07:39.553773       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:07:39.553797       1 config.go:309] "Starting node config controller"
	I1124 09:07:39.553808       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:07:39.553815       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:07:39.553817       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:07:39.654672       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:07:39.654696       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:07:39.654776       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8511ac48cd627d9ff60b0149b23f93346ef69d770e4169764582c1c9a39fd342] <==
	E1124 09:07:30.994226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:07:30.994537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:07:30.994691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 09:07:30.995025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 09:07:30.995356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:07:30.995404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:07:30.995502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:07:31.834821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:07:31.854357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:07:31.916714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:07:31.968305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 09:07:31.995644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:07:32.008685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:07:32.034293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:07:32.045688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 09:07:32.078528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:07:32.114248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 09:07:32.144014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:07:32.163314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:07:32.163412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:07:32.173613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:07:32.286339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:07:32.380746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:07:32.411245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1124 09:07:34.791706       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:07:34 default-k8s-diff-port-603918 kubelet[1469]: E1124 09:07:34.826567    1469 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-603918\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-603918"
	Nov 24 09:07:34 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:34.843353    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-603918" podStartSLOduration=1.8432976110000001 podStartE2EDuration="1.843297611s" podCreationTimestamp="2025-11-24 09:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:34.841934644 +0000 UTC m=+1.285704943" watchObservedRunningTime="2025-11-24 09:07:34.843297611 +0000 UTC m=+1.287067887"
	Nov 24 09:07:34 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:34.856503    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-603918" podStartSLOduration=1.8564835579999999 podStartE2EDuration="1.856483558s" podCreationTimestamp="2025-11-24 09:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:34.856386383 +0000 UTC m=+1.300156679" watchObservedRunningTime="2025-11-24 09:07:34.856483558 +0000 UTC m=+1.300253835"
	Nov 24 09:07:34 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:34.883844    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-603918" podStartSLOduration=1.883822646 podStartE2EDuration="1.883822646s" podCreationTimestamp="2025-11-24 09:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:34.881332354 +0000 UTC m=+1.325102631" watchObservedRunningTime="2025-11-24 09:07:34.883822646 +0000 UTC m=+1.327592930"
	Nov 24 09:07:37 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:37.948739    1469 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 09:07:37 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:37.952798    1469 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.788750    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66cc3c18-98b4-47fa-a69c-90041bacd287-kube-proxy\") pod \"kube-proxy-5hvkq\" (UID: \"66cc3c18-98b4-47fa-a69c-90041bacd287\") " pod="kube-system/kube-proxy-5hvkq"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.788797    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53f892c9-f95c-488d-886b-87b4d981b058-xtables-lock\") pod \"kindnet-b9gr6\" (UID: \"53f892c9-f95c-488d-886b-87b4d981b058\") " pod="kube-system/kindnet-b9gr6"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.788812    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53f892c9-f95c-488d-886b-87b4d981b058-lib-modules\") pod \"kindnet-b9gr6\" (UID: \"53f892c9-f95c-488d-886b-87b4d981b058\") " pod="kube-system/kindnet-b9gr6"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.788833    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkzkh\" (UniqueName: \"kubernetes.io/projected/53f892c9-f95c-488d-886b-87b4d981b058-kube-api-access-tkzkh\") pod \"kindnet-b9gr6\" (UID: \"53f892c9-f95c-488d-886b-87b4d981b058\") " pod="kube-system/kindnet-b9gr6"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.788945    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66cc3c18-98b4-47fa-a69c-90041bacd287-xtables-lock\") pod \"kube-proxy-5hvkq\" (UID: \"66cc3c18-98b4-47fa-a69c-90041bacd287\") " pod="kube-system/kube-proxy-5hvkq"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.788970    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/53f892c9-f95c-488d-886b-87b4d981b058-cni-cfg\") pod \"kindnet-b9gr6\" (UID: \"53f892c9-f95c-488d-886b-87b4d981b058\") " pod="kube-system/kindnet-b9gr6"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.789040    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k54s8\" (UniqueName: \"kubernetes.io/projected/66cc3c18-98b4-47fa-a69c-90041bacd287-kube-api-access-k54s8\") pod \"kube-proxy-5hvkq\" (UID: \"66cc3c18-98b4-47fa-a69c-90041bacd287\") " pod="kube-system/kube-proxy-5hvkq"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.789074    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66cc3c18-98b4-47fa-a69c-90041bacd287-lib-modules\") pod \"kube-proxy-5hvkq\" (UID: \"66cc3c18-98b4-47fa-a69c-90041bacd287\") " pod="kube-system/kube-proxy-5hvkq"
	Nov 24 09:07:39 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:39.787884    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5hvkq" podStartSLOduration=1.787863712 podStartE2EDuration="1.787863712s" podCreationTimestamp="2025-11-24 09:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:39.787832474 +0000 UTC m=+6.231602750" watchObservedRunningTime="2025-11-24 09:07:39.787863712 +0000 UTC m=+6.231633989"
	Nov 24 09:07:39 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:39.800158    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-b9gr6" podStartSLOduration=1.8001378670000001 podStartE2EDuration="1.800137867s" podCreationTimestamp="2025-11-24 09:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:39.79985624 +0000 UTC m=+6.243626520" watchObservedRunningTime="2025-11-24 09:07:39.800137867 +0000 UTC m=+6.243908144"
	Nov 24 09:07:50 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:50.039378    1469 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 09:07:50 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:50.171891    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptz72\" (UniqueName: \"kubernetes.io/projected/1081180d-32ee-417f-aea3-ba27c3ee7c30-kube-api-access-ptz72\") pod \"storage-provisioner\" (UID: \"1081180d-32ee-417f-aea3-ba27c3ee7c30\") " pod="kube-system/storage-provisioner"
	Nov 24 09:07:50 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:50.171943    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1081180d-32ee-417f-aea3-ba27c3ee7c30-tmp\") pod \"storage-provisioner\" (UID: \"1081180d-32ee-417f-aea3-ba27c3ee7c30\") " pod="kube-system/storage-provisioner"
	Nov 24 09:07:50 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:50.171962    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33252e00-03f6-4116-98b4-ffd795b3bce8-config-volume\") pod \"coredns-66bc5c9577-xrvmp\" (UID: \"33252e00-03f6-4116-98b4-ffd795b3bce8\") " pod="kube-system/coredns-66bc5c9577-xrvmp"
	Nov 24 09:07:50 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:50.171978    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vgkf\" (UniqueName: \"kubernetes.io/projected/33252e00-03f6-4116-98b4-ffd795b3bce8-kube-api-access-4vgkf\") pod \"coredns-66bc5c9577-xrvmp\" (UID: \"33252e00-03f6-4116-98b4-ffd795b3bce8\") " pod="kube-system/coredns-66bc5c9577-xrvmp"
	Nov 24 09:07:50 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:50.744573    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xrvmp" podStartSLOduration=11.744549804 podStartE2EDuration="11.744549804s" podCreationTimestamp="2025-11-24 09:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:50.744300654 +0000 UTC m=+17.188070934" watchObservedRunningTime="2025-11-24 09:07:50.744549804 +0000 UTC m=+17.188320081"
	Nov 24 09:07:52 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:52.650176    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.65014873 podStartE2EDuration="14.65014873s" podCreationTimestamp="2025-11-24 09:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:50.762642303 +0000 UTC m=+17.206412602" watchObservedRunningTime="2025-11-24 09:07:52.65014873 +0000 UTC m=+19.093919006"
	Nov 24 09:07:52 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:52.688323    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxnq6\" (UniqueName: \"kubernetes.io/projected/4581197a-228b-4f7d-a2bc-a5ef7b7eb2a7-kube-api-access-gxnq6\") pod \"busybox\" (UID: \"4581197a-228b-4f7d-a2bc-a5ef7b7eb2a7\") " pod="default/busybox"
	Nov 24 09:07:55 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:55.763587    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.21703949 podStartE2EDuration="3.763566605s" podCreationTimestamp="2025-11-24 09:07:52 +0000 UTC" firstStartedPulling="2025-11-24 09:07:53.074412301 +0000 UTC m=+19.518182586" lastFinishedPulling="2025-11-24 09:07:55.620939428 +0000 UTC m=+22.064709701" observedRunningTime="2025-11-24 09:07:55.762989698 +0000 UTC m=+22.206759992" watchObservedRunningTime="2025-11-24 09:07:55.763566605 +0000 UTC m=+22.207336884"
	
	
	==> storage-provisioner [7b6759161aaf750bd83cd8f574e0289de4f56c8660bfd1a4f8c9fef29a584e58] <==
	I1124 09:07:50.545602       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:07:50.553411       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:07:50.553518       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:07:50.555676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:50.561356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:07:50.561559       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:07:50.561702       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba6dd8d8-4ce9-40d3-9df4-feec65d10000", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-603918_cd8e94d8-c639-4b1f-8d52-71d384f58406 became leader
	I1124 09:07:50.561751       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-603918_cd8e94d8-c639-4b1f-8d52-71d384f58406!
	W1124 09:07:50.563681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:50.566772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:07:50.662270       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-603918_cd8e94d8-c639-4b1f-8d52-71d384f58406!
	W1124 09:07:52.569721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:52.576609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:54.580296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:54.584090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:56.587718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:56.592251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:58.596203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:58.600645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:08:00.604133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:08:00.609198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:08:02.612965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:08:02.617349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-603918 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-603918
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-603918:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d",
	        "Created": "2025-11-24T09:07:14.844491638Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 730402,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T09:07:14.880864922Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d/hosts",
	        "LogPath": "/var/lib/docker/containers/1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d/1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d-json.log",
	        "Name": "/default-k8s-diff-port-603918",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-603918:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-603918",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1f53ec0281671d6f9992164c99b884d156fb7576117b2a2ff643f0011175139d",
	                "LowerDir": "/var/lib/docker/overlay2/4eae6b16079bac56ef36203e4e58682749e5349afe43e78bc4493341a6fdab7b-init/diff:/var/lib/docker/overlay2/a062700147ad5d1f8f2a68f70ba6ad34ea6495dd365bcb265ab17ea27961837b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4eae6b16079bac56ef36203e4e58682749e5349afe43e78bc4493341a6fdab7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4eae6b16079bac56ef36203e4e58682749e5349afe43e78bc4493341a6fdab7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4eae6b16079bac56ef36203e4e58682749e5349afe43e78bc4493341a6fdab7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-603918",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-603918/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-603918",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-603918",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-603918",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "95e5aae672169464be761abd76dd20b5159df26b725f234432872b2158f40b29",
	            "SandboxKey": "/var/run/docker/netns/95e5aae67216",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-603918": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6224c7b85f0c971619d603b2dfeda75632e2c76d5aeff59e17162534427abf2e",
	                    "EndpointID": "86e0de6394802760d5a713ac2fc670727f1106744eaf7883f423e4f351f91ca9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "9a:2b:82:f9:c8:7c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-603918",
	                        "1f53ec028167"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-603918 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-603918 logs -n 25: (1.071219287s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ ssh     │ -p kubenet-203355 sudo cat /etc/resolv.conf                                                                                                                                                                                                                │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo crictl pods                                                                                                                                                                                                                         │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo crictl ps --all                                                                                                                                                                                                                     │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                              │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo ip a s                                                                                                                                                                                                                              │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo ip r s                                                                                                                                                                                                                              │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo iptables-save                                                                                                                                                                                                                       │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo iptables -t nat -L -n -v                                                                                                                                                                                                            │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo systemctl status kubelet --all --full --no-pager                                                                                                                                                                                    │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo systemctl cat kubelet --no-pager                                                                                                                                                                                                    │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                                                                     │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                                    │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ ssh     │ -p kubenet-203355 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                                    │ kubenet-203355               │ jenkins │ v1.37.0 │ 24 Nov 25 09:04 UTC │                     │
	│ start   │ -p default-k8s-diff-port-603918 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-603918 │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ start   │ -p newest-cni-654569 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-841285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-841285           │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ start   │ -p embed-certs-841285 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-841285           │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-654569 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ stop    │ -p newest-cni-654569 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-654569 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ start   │ -p newest-cni-654569 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:07 UTC │
	│ image   │ newest-cni-654569 image list --format=json                                                                                                                                                                                                                 │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:07 UTC │ 24 Nov 25 09:08 UTC │
	│ pause   │ -p newest-cni-654569 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:08 UTC │ 24 Nov 25 09:08 UTC │
	│ unpause │ -p newest-cni-654569 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:08 UTC │ 24 Nov 25 09:08 UTC │
	│ delete  │ -p newest-cni-654569                                                                                                                                                                                                                                       │ newest-cni-654569            │ jenkins │ v1.37.0 │ 24 Nov 25 09:08 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 09:07:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 09:07:48.085172  740119 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:07:48.085422  740119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:07:48.085431  740119 out.go:374] Setting ErrFile to fd 2...
	I1124 09:07:48.085435  740119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:07:48.085654  740119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 09:07:48.086098  740119 out.go:368] Setting JSON to false
	I1124 09:07:48.087476  740119 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13804,"bootTime":1763961464,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:07:48.087538  740119 start.go:143] virtualization: kvm guest
	I1124 09:07:48.089341  740119 out.go:179] * [newest-cni-654569] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:07:48.090342  740119 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:07:48.090357  740119 notify.go:221] Checking for updates...
	I1124 09:07:48.092506  740119 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:07:48.093570  740119 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:07:48.094577  740119 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 09:07:48.095525  740119 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:07:48.096560  740119 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:07:48.098935  740119 config.go:182] Loaded profile config "newest-cni-654569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:07:48.099441  740119 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:07:48.123883  740119 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:07:48.123985  740119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:07:48.180131  740119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:07:48.170292777 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
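The "docker system info --format {{json .}}" call above is how driver validation snapshots the host daemon (CPU count, memory, cgroup driver, server version). A minimal standalone probe in the same spirit, assuming only that the Docker CLI is on PATH; the struct below is a hand-picked subset of the JSON fields, not minikube's own type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Subset of the fields emitted by: docker system info --format "{{json .}}"
	type dockerInfo struct {
		NCPU          int    `json:"NCPU"`
		MemTotal      int64  `json:"MemTotal"`
		ServerVersion string `json:"ServerVersion"`
		CgroupDriver  string `json:"CgroupDriver"`
		OSType        string `json:"OSType"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s: %d CPUs, %d bytes RAM, cgroup driver %q\n",
			info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver)
	}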
	I1124 09:07:48.180255  740119 docker.go:319] overlay module found
	I1124 09:07:48.181756  740119 out.go:179] * Using the docker driver based on existing profile
	I1124 09:07:48.182725  740119 start.go:309] selected driver: docker
	I1124 09:07:48.182739  740119 start.go:927] validating driver "docker" against &{Name:newest-cni-654569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-654569 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mul
tiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:07:48.182835  740119 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:07:48.183414  740119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:07:48.245922  740119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 09:07:48.236204951 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:07:48.246244  740119 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 09:07:48.246289  740119 cni.go:84] Creating CNI manager for ""
	I1124 09:07:48.246353  740119 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:07:48.246408  740119 start.go:353] cluster config:
	{Name:newest-cni-654569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-654569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:07:48.248109  740119 out.go:179] * Starting "newest-cni-654569" primary control-plane node in "newest-cni-654569" cluster
	I1124 09:07:48.249170  740119 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 09:07:48.250253  740119 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 09:07:48.251335  740119 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:07:48.251399  740119 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 09:07:48.272221  740119 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 09:07:48.272245  740119 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	W1124 09:07:48.359770  740119 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	W1124 09:07:48.393606  740119 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	I1124 09:07:48.393800  740119 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/config.json ...
	I1124 09:07:48.393944  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:48.394108  740119 cache.go:243] Successfully downloaded all kic artifacts
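Because both preload tarball URLs returned 404, the kubeadm binary is taken straight from dl.k8s.io with a checksum reference (?checksum=file:...kubeadm.sha256). A rough, standalone equivalent of that verified download, reusing the URLs shown above and ignoring minikube's own cache layout:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}
		want := strings.Fields(string(sum))[0] // hex digest, possibly followed by a filename
		h := sha256.Sum256(bin)
		if got := hex.EncodeToString(h[:]); got != want {
			panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
		}
		fmt.Printf("kubeadm verified: %d bytes\n", len(bin))
	}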
	I1124 09:07:48.394156  740119 start.go:360] acquireMachinesLock for newest-cni-654569: {Name:mk77a4f7dd1c44df67b8fabeed9184a8f376f91c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:48.394277  740119 start.go:364] duration metric: took 68.815µs to acquireMachinesLock for "newest-cni-654569"
	I1124 09:07:48.394301  740119 start.go:96] Skipping create...Using existing machine configuration
	I1124 09:07:48.394308  740119 fix.go:54] fixHost starting: 
	I1124 09:07:48.394636  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:48.414582  740119 fix.go:112] recreateIfNeeded on newest-cni-654569: state=Stopped err=<nil>
	W1124 09:07:48.414626  740119 fix.go:138] unexpected machine state, will restart: <nil>
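Whether the existing machine is reused or restarted is decided from the container state returned by "docker container inspect --format={{.State.Status}}" (here mapped to Stopped, hence the "Restarting existing docker container" step further down). The same check in isolation, using the profile name from this run; note the raw Docker status is lowercase ("exited", "running") and minikube maps it to its own state names:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"newest-cni-654569", "--format", "{{.State.Status}}").Output()
		if err != nil {
			panic(err)
		}
		state := strings.TrimSpace(string(out))
		if state == "running" {
			fmt.Println("container already running, nothing to restart")
			return
		}
		// The fix path falls through to: docker start newest-cni-654569
		fmt.Printf("raw docker state is %q, container needs a restart\n", state)
	}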
	I1124 09:07:46.193942  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:07:46.194408  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
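The healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint; a connection-refused error is recorded as "stopped" and triggers the log-gathering pass below. A stripped-down version of the probe follows; certificate verification is skipped here only because this sketch does not load the cluster CA, which the real check trusts:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver is restarting
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode)
	}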
	I1124 09:07:46.194525  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:07:46.194585  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:07:46.240281  685562 cri.go:89] found id: "cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:46.240304  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:46.240310  685562 cri.go:89] found id: ""
	I1124 09:07:46.240319  685562 logs.go:282] 2 containers: [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:07:46.240383  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.245436  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.250118  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:07:46.250185  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:07:46.280616  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:46.280638  685562 cri.go:89] found id: ""
	I1124 09:07:46.280650  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:07:46.280714  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.285684  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:07:46.285748  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:07:46.318854  685562 cri.go:89] found id: ""
	I1124 09:07:46.318885  685562 logs.go:282] 0 containers: []
	W1124 09:07:46.318898  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:07:46.319198  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:07:46.319291  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:07:46.365180  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:46.365209  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:46.365215  685562 cri.go:89] found id: ""
	I1124 09:07:46.365227  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:07:46.365285  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.370948  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.376120  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:07:46.376278  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:07:46.408937  685562 cri.go:89] found id: ""
	I1124 09:07:46.408967  685562 logs.go:282] 0 containers: []
	W1124 09:07:46.408978  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:07:46.408987  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:07:46.409050  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:07:46.439842  685562 cri.go:89] found id: "d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:46.439865  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:46.439871  685562 cri.go:89] found id: ""
	I1124 09:07:46.439880  685562 logs.go:282] 2 containers: [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:07:46.439941  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.444872  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:46.449213  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:07:46.449282  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:07:46.483632  685562 cri.go:89] found id: ""
	I1124 09:07:46.483668  685562 logs.go:282] 0 containers: []
	W1124 09:07:46.483681  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:07:46.483690  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:07:46.483751  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:07:46.515552  685562 cri.go:89] found id: ""
	I1124 09:07:46.515583  685562 logs.go:282] 0 containers: []
	W1124 09:07:46.515595  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:07:46.515661  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:07:46.515691  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:07:46.530847  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:07:46.530884  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:07:46.594391  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:07:46.594420  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:07:46.594439  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:46.631540  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:07:46.631571  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:46.670437  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:07:46.670479  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:46.709947  685562 logs.go:123] Gathering logs for kube-controller-manager [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2] ...
	I1124 09:07:46.709980  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:46.741928  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:07:46.741957  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:46.785347  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:07:46.785378  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:07:46.819216  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:07:46.819246  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:07:46.913672  685562 logs.go:123] Gathering logs for kube-apiserver [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6] ...
	I1124 09:07:46.913715  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:46.948732  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:07:46.948764  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:46.978072  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:07:46.978099  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
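Each failed healthz attempt triggers the same diagnostic sweep: list candidate control-plane containers with "crictl ps -a --quiet --name=<component>", tail each one with "crictl logs --tail 400 <id>", and pull the kubelet and containerd journals. The per-component half of that sweep, sketched for kube-apiserver only (run on the node itself, where crictl needs root):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// IDs of all kube-apiserver containers, running or exited.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d kube-apiserver container(s)\n", len(ids))
		for _, id := range ids {
			// Mirror the harness: last 400 lines per container.
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("crictl logs %s: %v\n", id, err)
				continue
			}
			fmt.Printf("=== %s ===\n%s\n", id, logs)
		}
	}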
	I1124 09:07:49.524951  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:07:49.525424  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:07:49.525516  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:07:49.525571  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:07:49.553150  685562 cri.go:89] found id: "cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:49.553170  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:49.553173  685562 cri.go:89] found id: ""
	I1124 09:07:49.553181  685562 logs.go:282] 2 containers: [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:07:49.553234  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.557530  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.561623  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:07:49.561685  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:07:49.588210  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:49.588230  685562 cri.go:89] found id: ""
	I1124 09:07:49.588248  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:07:49.588308  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.592320  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:07:49.592401  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:07:49.619927  685562 cri.go:89] found id: ""
	I1124 09:07:49.619953  685562 logs.go:282] 0 containers: []
	W1124 09:07:49.619961  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:07:49.619968  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:07:49.620024  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:07:49.646508  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:49.646528  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:49.646532  685562 cri.go:89] found id: ""
	I1124 09:07:49.646539  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:07:49.646588  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.650850  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.654694  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:07:49.654752  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:07:49.682858  685562 cri.go:89] found id: ""
	I1124 09:07:49.682891  685562 logs.go:282] 0 containers: []
	W1124 09:07:49.682903  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:07:49.682911  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:07:49.682982  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:07:49.710107  685562 cri.go:89] found id: "d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:49.710134  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:49.710140  685562 cri.go:89] found id: ""
	I1124 09:07:49.710150  685562 logs.go:282] 2 containers: [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:07:49.710225  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.714861  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:49.718812  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:07:49.718872  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:07:49.746561  685562 cri.go:89] found id: ""
	I1124 09:07:49.746593  685562 logs.go:282] 0 containers: []
	W1124 09:07:49.746606  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:07:49.746615  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:07:49.746669  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:07:49.774674  685562 cri.go:89] found id: ""
	I1124 09:07:49.774699  685562 logs.go:282] 0 containers: []
	W1124 09:07:49.774707  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:07:49.774717  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:07:49.774731  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1124 09:07:47.211797  728321 node_ready.go:57] node "default-k8s-diff-port-603918" has "Ready":"False" status (will retry)
	W1124 09:07:49.710953  728321 node_ready.go:57] node "default-k8s-diff-port-603918" has "Ready":"False" status (will retry)
	I1124 09:07:50.211800  728321 node_ready.go:49] node "default-k8s-diff-port-603918" is "Ready"
	I1124 09:07:50.211830  728321 node_ready.go:38] duration metric: took 11.503977315s for node "default-k8s-diff-port-603918" to be "Ready" ...
	I1124 09:07:50.211847  728321 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:07:50.211891  728321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:07:50.225299  728321 api_server.go:72] duration metric: took 11.802560258s to wait for apiserver process to appear ...
	I1124 09:07:50.225333  728321 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:07:50.225370  728321 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 09:07:50.230797  728321 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 09:07:50.231792  728321 api_server.go:141] control plane version: v1.34.2
	I1124 09:07:50.231821  728321 api_server.go:131] duration metric: took 6.479948ms to wait for apiserver health ...
	I1124 09:07:50.231834  728321 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:07:50.234788  728321 system_pods.go:59] 8 kube-system pods found
	I1124 09:07:50.234838  728321 system_pods.go:61] "coredns-66bc5c9577-xrvmp" [33252e00-03f6-4116-98b4-ffd795b3bce8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:07:50.234851  728321 system_pods.go:61] "etcd-default-k8s-diff-port-603918" [48914200-8900-4bb2-abe0-83dda320f67c] Running
	I1124 09:07:50.234864  728321 system_pods.go:61] "kindnet-b9gr6" [53f892c9-f95c-488d-886b-87b4d981b058] Running
	I1124 09:07:50.234870  728321 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-603918" [fd7c4392-7b1f-49b7-ae71-c3d85585a4bb] Running
	I1124 09:07:50.234876  728321 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-603918" [7ae71128-323b-4d75-9716-2911dfc3eff1] Running
	I1124 09:07:50.234882  728321 system_pods.go:61] "kube-proxy-5hvkq" [66cc3c18-98b4-47fa-a69c-90041bacd287] Running
	I1124 09:07:50.234888  728321 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-603918" [33d67c96-b92a-4ebb-a850-62f5984bf88b] Running
	I1124 09:07:50.234897  728321 system_pods.go:61] "storage-provisioner" [1081180d-32ee-417f-aea3-ba27c3ee7c30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:07:50.234909  728321 system_pods.go:74] duration metric: took 3.067184ms to wait for pod list to return data ...
	I1124 09:07:50.234922  728321 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:07:50.237471  728321 default_sa.go:45] found service account: "default"
	I1124 09:07:50.237497  728321 default_sa.go:55] duration metric: took 2.56863ms for default service account to be created ...
	I1124 09:07:50.237507  728321 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 09:07:50.240092  728321 system_pods.go:86] 8 kube-system pods found
	I1124 09:07:50.240131  728321 system_pods.go:89] "coredns-66bc5c9577-xrvmp" [33252e00-03f6-4116-98b4-ffd795b3bce8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:07:50.240141  728321 system_pods.go:89] "etcd-default-k8s-diff-port-603918" [48914200-8900-4bb2-abe0-83dda320f67c] Running
	I1124 09:07:50.240158  728321 system_pods.go:89] "kindnet-b9gr6" [53f892c9-f95c-488d-886b-87b4d981b058] Running
	I1124 09:07:50.240164  728321 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-603918" [fd7c4392-7b1f-49b7-ae71-c3d85585a4bb] Running
	I1124 09:07:50.240170  728321 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-603918" [7ae71128-323b-4d75-9716-2911dfc3eff1] Running
	I1124 09:07:50.240182  728321 system_pods.go:89] "kube-proxy-5hvkq" [66cc3c18-98b4-47fa-a69c-90041bacd287] Running
	I1124 09:07:50.240186  728321 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-603918" [33d67c96-b92a-4ebb-a850-62f5984bf88b] Running
	I1124 09:07:50.240196  728321 system_pods.go:89] "storage-provisioner" [1081180d-32ee-417f-aea3-ba27c3ee7c30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:07:50.240226  728321 retry.go:31] will retry after 206.653018ms: missing components: kube-dns
	I1124 09:07:50.452255  728321 system_pods.go:86] 8 kube-system pods found
	I1124 09:07:50.452299  728321 system_pods.go:89] "coredns-66bc5c9577-xrvmp" [33252e00-03f6-4116-98b4-ffd795b3bce8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 09:07:50.452305  728321 system_pods.go:89] "etcd-default-k8s-diff-port-603918" [48914200-8900-4bb2-abe0-83dda320f67c] Running
	I1124 09:07:50.452311  728321 system_pods.go:89] "kindnet-b9gr6" [53f892c9-f95c-488d-886b-87b4d981b058] Running
	I1124 09:07:50.452315  728321 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-603918" [fd7c4392-7b1f-49b7-ae71-c3d85585a4bb] Running
	I1124 09:07:50.452318  728321 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-603918" [7ae71128-323b-4d75-9716-2911dfc3eff1] Running
	I1124 09:07:50.452321  728321 system_pods.go:89] "kube-proxy-5hvkq" [66cc3c18-98b4-47fa-a69c-90041bacd287] Running
	I1124 09:07:50.452325  728321 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-603918" [33d67c96-b92a-4ebb-a850-62f5984bf88b] Running
	I1124 09:07:50.452329  728321 system_pods.go:89] "storage-provisioner" [1081180d-32ee-417f-aea3-ba27c3ee7c30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 09:07:50.452355  728321 retry.go:31] will retry after 367.625451ms: missing components: kube-dns
	I1124 09:07:50.824329  728321 system_pods.go:86] 8 kube-system pods found
	I1124 09:07:50.824357  728321 system_pods.go:89] "coredns-66bc5c9577-xrvmp" [33252e00-03f6-4116-98b4-ffd795b3bce8] Running
	I1124 09:07:50.824363  728321 system_pods.go:89] "etcd-default-k8s-diff-port-603918" [48914200-8900-4bb2-abe0-83dda320f67c] Running
	I1124 09:07:50.824367  728321 system_pods.go:89] "kindnet-b9gr6" [53f892c9-f95c-488d-886b-87b4d981b058] Running
	I1124 09:07:50.824371  728321 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-603918" [fd7c4392-7b1f-49b7-ae71-c3d85585a4bb] Running
	I1124 09:07:50.824374  728321 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-603918" [7ae71128-323b-4d75-9716-2911dfc3eff1] Running
	I1124 09:07:50.824384  728321 system_pods.go:89] "kube-proxy-5hvkq" [66cc3c18-98b4-47fa-a69c-90041bacd287] Running
	I1124 09:07:50.824388  728321 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-603918" [33d67c96-b92a-4ebb-a850-62f5984bf88b] Running
	I1124 09:07:50.824392  728321 system_pods.go:89] "storage-provisioner" [1081180d-32ee-417f-aea3-ba27c3ee7c30] Running
	I1124 09:07:50.824400  728321 system_pods.go:126] duration metric: took 586.886497ms to wait for k8s-apps to be running ...
	I1124 09:07:50.824412  728321 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 09:07:50.824490  728321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 09:07:50.837644  728321 system_svc.go:56] duration metric: took 13.224987ms WaitForService to wait for kubelet
	I1124 09:07:50.837669  728321 kubeadm.go:587] duration metric: took 12.414938686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 09:07:50.837685  728321 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:07:50.840072  728321 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:07:50.840098  728321 node_conditions.go:123] node cpu capacity is 8
	I1124 09:07:50.840117  728321 node_conditions.go:105] duration metric: took 2.426436ms to run NodePressure ...
	I1124 09:07:50.840133  728321 start.go:242] waiting for startup goroutines ...
	I1124 09:07:50.840147  728321 start.go:247] waiting for cluster config update ...
	I1124 09:07:50.840161  728321 start.go:256] writing updated cluster config ...
	I1124 09:07:50.840487  728321 ssh_runner.go:195] Run: rm -f paused
	I1124 09:07:50.844243  728321 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:07:50.847626  728321 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xrvmp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:50.851574  728321 pod_ready.go:94] pod "coredns-66bc5c9577-xrvmp" is "Ready"
	I1124 09:07:50.851600  728321 pod_ready.go:86] duration metric: took 3.950663ms for pod "coredns-66bc5c9577-xrvmp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:50.853329  728321 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:50.856853  728321 pod_ready.go:94] pod "etcd-default-k8s-diff-port-603918" is "Ready"
	I1124 09:07:50.856873  728321 pod_ready.go:86] duration metric: took 3.526484ms for pod "etcd-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:50.858612  728321 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:50.862325  728321 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-603918" is "Ready"
	I1124 09:07:50.862346  728321 pod_ready.go:86] duration metric: took 3.715322ms for pod "kube-apiserver-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:50.863994  728321 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 09:07:47.324158  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	W1124 09:07:49.324287  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	W1124 09:07:51.824159  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	I1124 09:07:51.248382  728321 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-603918" is "Ready"
	I1124 09:07:51.248416  728321 pod_ready.go:86] duration metric: took 384.402391ms for pod "kube-controller-manager-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:51.448446  728321 pod_ready.go:83] waiting for pod "kube-proxy-5hvkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:51.848140  728321 pod_ready.go:94] pod "kube-proxy-5hvkq" is "Ready"
	I1124 09:07:51.848166  728321 pod_ready.go:86] duration metric: took 399.659801ms for pod "kube-proxy-5hvkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:52.049612  728321 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:52.449194  728321 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-603918" is "Ready"
	I1124 09:07:52.449217  728321 pod_ready.go:86] duration metric: took 399.576687ms for pod "kube-scheduler-default-k8s-diff-port-603918" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 09:07:52.449234  728321 pod_ready.go:40] duration metric: took 1.604961347s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 09:07:52.494045  728321 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1124 09:07:52.496103  728321 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-603918" cluster and "default" namespace by default
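The pod_ready block above only reports "Done!" once every control-plane pod and CoreDNS carries a Ready condition. An equivalent spot-check against the same cluster can be made with kubectl's wait verb; a small wrapper, assuming kubectl is installed and the context created by this run is still present:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The same labels the harness polls, expressed as kubectl selectors.
		selectors := []string{
			"k8s-app=kube-dns",
			"component=etcd",
			"component=kube-apiserver",
			"component=kube-controller-manager",
			"k8s-app=kube-proxy",
			"component=kube-scheduler",
		}
		for _, sel := range selectors {
			cmd := exec.Command("kubectl", "--context", "default-k8s-diff-port-603918",
				"-n", "kube-system", "wait", "--for=condition=Ready", "pod", "-l", sel, "--timeout=4m")
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Printf("pods matching %q not ready: %v\n", sel, err)
			}
		}
	}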
	I1124 09:07:48.416413  740119 out.go:252] * Restarting existing docker container for "newest-cni-654569" ...
	I1124 09:07:48.416505  740119 cli_runner.go:164] Run: docker start newest-cni-654569
	I1124 09:07:48.699338  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:48.719279  740119 kic.go:430] container "newest-cni-654569" state is running.
	I1124 09:07:48.719771  740119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-654569
	I1124 09:07:48.721000  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:48.740378  740119 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/config.json ...
	I1124 09:07:48.740650  740119 machine.go:94] provisionDockerMachine start ...
	I1124 09:07:48.740713  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:48.761816  740119 main.go:143] libmachine: Using SSH client type: native
	I1124 09:07:48.762152  740119 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1124 09:07:48.762171  740119 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 09:07:48.762773  740119 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34874->127.0.0.1:33103: read: connection reset by peer
	I1124 09:07:49.060328  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:49.389088  740119 cache.go:107] acquiring lock: {Name:mkbcabeb5a23ff077ffdad64c71e9fe699d94040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389135  740119 cache.go:107] acquiring lock: {Name:mk7f052905284f586f4f1cf24b8c34cc48e0b85b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389131  740119 cache.go:107] acquiring lock: {Name:mk92c82896924ab47423467b25ccd98ee4128baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389100  740119 cache.go:107] acquiring lock: {Name:mk8023690ce5b18d9a1789b2f878bf92c1381799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389143  740119 cache.go:107] acquiring lock: {Name:mkf3a006b133f81ed32779d427a8d0a9b25f9000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389225  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1124 09:07:49.389225  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 09:07:49.389237  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1124 09:07:49.389249  740119 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 113.053µs
	I1124 09:07:49.389253  740119 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 188.199µs
	I1124 09:07:49.389259  740119 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 126.312µs
	I1124 09:07:49.389265  740119 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1124 09:07:49.389265  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 09:07:49.389269  740119 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 09:07:49.389248  740119 cache.go:107] acquiring lock: {Name:mk1d635b72f6d026600360916178f900a450350e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389284  740119 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 161.366µs
	I1124 09:07:49.389106  740119 cache.go:107] acquiring lock: {Name:mkd74819cb24442927f7fb2cffd47478de40e14c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389296  740119 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 09:07:49.389272  740119 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1124 09:07:49.389287  740119 cache.go:107] acquiring lock: {Name:mk6b573bbd33cfc3c3f77668030fb064598572fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 09:07:49.389415  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1124 09:07:49.389425  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1124 09:07:49.389437  740119 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 258.146µs
	I1124 09:07:49.389445  740119 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 238.909µs
	I1124 09:07:49.389455  740119 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1124 09:07:49.389475  740119 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1124 09:07:49.389430  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1124 09:07:49.389496  740119 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 408.179µs
	I1124 09:07:49.389507  740119 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1124 09:07:49.389546  740119 cache.go:115] /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 exists
	I1124 09:07:49.389568  740119 cache.go:96] cache image "registry.k8s.io/etcd:3.5.24-0" -> "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0" took 469.236µs
	I1124 09:07:49.389578  740119 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.24-0 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.24-0 succeeded
	I1124 09:07:49.389595  740119 cache.go:87] Successfully saved all images to host disk.
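The burst of cache lines above is the per-image fast path: acquire the image's lock, stat the expected tarball under .minikube/cache/images/amd64/, and skip the export when it is already there, which is why every image finishes in a few hundred microseconds. The existence check on its own, with the cache root and one image path taken from this log (the helper name is illustrative, not minikube's):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// cachedTarball is a hypothetical helper: it reports whether an image has
	// already been exported under the given cache root.
	func cachedTarball(cacheRoot, relPath string) (string, bool) {
		p := filepath.Join(cacheRoot, relPath)
		_, err := os.Stat(p)
		return p, err == nil
	}

	func main() {
		root := "/home/jenkins/minikube-integration/21978-435860/.minikube/cache/images/amd64"
		if p, ok := cachedTarball(root, "registry.k8s.io/kube-apiserver_v1.35.0-beta.0"); ok {
			fmt.Println("exists, skipping save:", p)
		} else {
			fmt.Println("missing, would export image to:", p)
		}
	}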
	I1124 09:07:51.905216  740119 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-654569
	
	I1124 09:07:51.905256  740119 ubuntu.go:182] provisioning hostname "newest-cni-654569"
	I1124 09:07:51.905343  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:51.923076  740119 main.go:143] libmachine: Using SSH client type: native
	I1124 09:07:51.923312  740119 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1124 09:07:51.923327  740119 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-654569 && echo "newest-cni-654569" | sudo tee /etc/hostname
	I1124 09:07:52.074711  740119 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-654569
	
	I1124 09:07:52.074778  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:52.093045  740119 main.go:143] libmachine: Using SSH client type: native
	I1124 09:07:52.093342  740119 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1124 09:07:52.093370  740119 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-654569' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-654569/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-654569' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 09:07:52.236140  740119 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 09:07:52.236193  740119 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21978-435860/.minikube CaCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21978-435860/.minikube}
	I1124 09:07:52.236221  740119 ubuntu.go:190] setting up certificates
	I1124 09:07:52.236242  740119 provision.go:84] configureAuth start
	I1124 09:07:52.236302  740119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-654569
	I1124 09:07:52.255013  740119 provision.go:143] copyHostCerts
	I1124 09:07:52.255080  740119 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem, removing ...
	I1124 09:07:52.255100  740119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem
	I1124 09:07:52.255181  740119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/ca.pem (1082 bytes)
	I1124 09:07:52.255372  740119 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem, removing ...
	I1124 09:07:52.255389  740119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem
	I1124 09:07:52.255433  740119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/cert.pem (1123 bytes)
	I1124 09:07:52.255544  740119 exec_runner.go:144] found /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem, removing ...
	I1124 09:07:52.255554  740119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem
	I1124 09:07:52.255583  740119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21978-435860/.minikube/key.pem (1675 bytes)
	I1124 09:07:52.255650  740119 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem org=jenkins.newest-cni-654569 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-654569]
	I1124 09:07:52.306365  740119 provision.go:177] copyRemoteCerts
	I1124 09:07:52.306413  740119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 09:07:52.306447  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:52.324740  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:52.426510  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 09:07:52.444101  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 09:07:52.462132  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 09:07:52.479956  740119 provision.go:87] duration metric: took 243.697789ms to configureAuth
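configureAuth regenerates the machine's server certificate with SANs for 127.0.0.1, the container IP (192.168.103.2), localhost, minikube and the profile name, then copies ca.pem, server.pem and server-key.pem into /etc/docker over SSH. The SAN list on the freshly generated certificate can be confirmed from the local copy whose path appears above; a short sketch using the Go standard library:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		raw, err := os.ReadFile("/home/jenkins/minikube-integration/21978-435860/.minikube/machines/server.pem")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block in server.pem")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Expect the SANs logged by provision.go:117 above.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
		fmt.Println("Subject: ", cert.Subject)
	}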
	I1124 09:07:52.479981  740119 ubuntu.go:206] setting minikube options for container-runtime
	I1124 09:07:52.480188  740119 config.go:182] Loaded profile config "newest-cni-654569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:07:52.480205  740119 machine.go:97] duration metric: took 3.739539072s to provisionDockerMachine
	I1124 09:07:52.480216  740119 start.go:293] postStartSetup for "newest-cni-654569" (driver="docker")
	I1124 09:07:52.480234  740119 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 09:07:52.480319  740119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 09:07:52.480368  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:52.501120  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:52.607590  740119 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 09:07:52.611746  740119 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 09:07:52.611770  740119 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 09:07:52.611782  740119 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/addons for local assets ...
	I1124 09:07:52.611845  740119 filesync.go:126] Scanning /home/jenkins/minikube-integration/21978-435860/.minikube/files for local assets ...
	I1124 09:07:52.611937  740119 filesync.go:149] local asset: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem -> 4395242.pem in /etc/ssl/certs
	I1124 09:07:52.612044  740119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 09:07:52.619818  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:07:52.639936  740119 start.go:296] duration metric: took 159.699932ms for postStartSetup
	I1124 09:07:52.640022  740119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 09:07:52.640071  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:52.663072  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:52.763563  740119 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 09:07:52.768496  740119 fix.go:56] duration metric: took 4.374175847s for fixHost
	I1124 09:07:52.768522  740119 start.go:83] releasing machines lock for "newest-cni-654569", held for 4.374229582s
	I1124 09:07:52.768590  740119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-654569
	I1124 09:07:52.788989  740119 ssh_runner.go:195] Run: cat /version.json
	I1124 09:07:52.789040  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:52.789095  740119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 09:07:52.789155  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:52.810188  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:52.810852  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:52.968219  740119 ssh_runner.go:195] Run: systemctl --version
	I1124 09:07:52.976647  740119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 09:07:52.982167  740119 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 09:07:52.982248  740119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 09:07:52.991521  740119 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 09:07:52.991541  740119 start.go:496] detecting cgroup driver to use...
	I1124 09:07:52.991575  740119 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 09:07:52.991634  740119 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 09:07:53.013188  740119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 09:07:53.027530  740119 docker.go:218] disabling cri-docker service (if available) ...
	I1124 09:07:53.027605  740119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 09:07:53.043401  740119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 09:07:53.055944  740119 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 09:07:53.138523  740119 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 09:07:53.225021  740119 docker.go:234] disabling docker service ...
	I1124 09:07:53.225088  740119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 09:07:53.239839  740119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 09:07:53.253817  740119 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 09:07:53.342228  740119 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 09:07:53.434381  740119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 09:07:53.448642  740119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 09:07:53.463981  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:53.781061  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 09:07:53.791059  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 09:07:53.800164  740119 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 09:07:53.800220  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 09:07:53.809170  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:07:53.817850  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 09:07:53.827229  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 09:07:53.835766  740119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 09:07:53.843728  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 09:07:53.852452  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 09:07:53.861172  740119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 09:07:53.869750  740119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 09:07:53.876842  740119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 09:07:53.884022  740119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:07:53.964111  740119 ssh_runner.go:195] Run: sudo systemctl restart containerd
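The sed edits above rewrite /etc/containerd/config.toml in place (pause image, cgroup driver, runc runtime version, CNI conf_dir) before containerd is restarted. As a minimal spot-check sketch, not taken from this run and assuming the same file path:

	# Hypothetical verification of the config edits applied above.
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# Expected (assumed) values after the sed commands:
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   SystemdCgroup = true
	#   conf_dir = "/etc/cni/net.d"
	sudo systemctl restart containerd && systemctl is-active containerd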
	I1124 09:07:54.057102  740119 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 09:07:54.057193  740119 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 09:07:54.061455  740119 start.go:564] Will wait 60s for crictl version
	I1124 09:07:54.061535  740119 ssh_runner.go:195] Run: which crictl
	I1124 09:07:54.065270  740119 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 09:07:54.089954  740119 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 09:07:54.090014  740119 ssh_runner.go:195] Run: containerd --version
	I1124 09:07:54.111497  740119 ssh_runner.go:195] Run: containerd --version
	I1124 09:07:54.135282  740119 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1124 09:07:54.136314  740119 cli_runner.go:164] Run: docker network inspect newest-cni-654569 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
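The long --format template above extracts the network name, driver, subnet, gateway, MTU, and container IPs from docker network inspect in a single call. A trimmed-down sketch of the same technique (network name taken from the log; output shape assumed):

	# Hypothetical simplified variant of the inspect template used above.
	docker network inspect newest-cni-654569 \
	  --format 'subnet={{range .IPAM.Config}}{{.Subnet}}{{end}} gateway={{range .IPAM.Config}}{{.Gateway}}{{end}}'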
	I1124 09:07:54.154057  740119 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 09:07:54.158283  740119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:07:54.170280  740119 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
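The /etc/hosts rewrite a few lines above uses a filter-and-append idiom so the host.minikube.internal entry is replaced rather than duplicated. A minimal sketch of the same idiom, with the IP and hostname taken from the log:

	# Hypothetical restatement of the hosts-file update shown above:
	# drop any existing entry, append the fresh one, then copy back with sudo.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.103.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts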
	I1124 09:07:49.857357  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:07:49.857392  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:07:49.872170  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:07:49.872205  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:49.906798  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:07:49.906829  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:49.944383  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:07:49.944413  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:49.977121  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:07:49.977151  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:07:50.023751  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:07:50.023790  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:07:50.092853  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:07:50.092874  685562 logs.go:123] Gathering logs for kube-apiserver [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6] ...
	I1124 09:07:50.092887  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:50.124349  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:07:50.124378  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:50.157974  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:07:50.158005  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:50.186445  685562 logs.go:123] Gathering logs for kube-controller-manager [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2] ...
	I1124 09:07:50.186485  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:50.215211  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:07:50.215240  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:07:52.750543  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:07:52.751008  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:07:52.751076  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:07:52.751140  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:07:52.779222  685562 cri.go:89] found id: "cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:52.779253  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:52.779259  685562 cri.go:89] found id: ""
	I1124 09:07:52.779270  685562 logs.go:282] 2 containers: [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:07:52.779325  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.783396  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.787381  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:07:52.787433  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:07:52.819643  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:52.819665  685562 cri.go:89] found id: ""
	I1124 09:07:52.819675  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:07:52.819727  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.824397  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:07:52.824483  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:07:52.852859  685562 cri.go:89] found id: ""
	I1124 09:07:52.852884  685562 logs.go:282] 0 containers: []
	W1124 09:07:52.852893  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:07:52.852901  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:07:52.852958  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:07:52.880546  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:52.880574  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:52.880581  685562 cri.go:89] found id: ""
	I1124 09:07:52.880596  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:07:52.880655  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.884728  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.888394  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:07:52.888449  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:07:52.913593  685562 cri.go:89] found id: ""
	I1124 09:07:52.913619  685562 logs.go:282] 0 containers: []
	W1124 09:07:52.913629  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:07:52.913637  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:07:52.913691  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:07:52.940155  685562 cri.go:89] found id: "d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:52.940175  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:52.940181  685562 cri.go:89] found id: ""
	I1124 09:07:52.940192  685562 logs.go:282] 2 containers: [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:07:52.940249  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.944598  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:52.948215  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:07:52.948283  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:07:52.975415  685562 cri.go:89] found id: ""
	I1124 09:07:52.975443  685562 logs.go:282] 0 containers: []
	W1124 09:07:52.975453  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:07:52.975491  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:07:52.975555  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:07:53.007272  685562 cri.go:89] found id: ""
	I1124 09:07:53.007301  685562 logs.go:282] 0 containers: []
	W1124 09:07:53.007312  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:07:53.007333  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:07:53.007347  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:07:53.121553  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:07:53.121586  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:07:53.188763  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:07:53.188783  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:07:53.188795  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:53.222509  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:07:53.222540  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:53.250796  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:07:53.250823  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:07:53.302451  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:07:53.302504  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:07:53.334584  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:07:53.334613  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:07:53.349579  685562 logs.go:123] Gathering logs for kube-apiserver [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6] ...
	I1124 09:07:53.349601  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:53.385162  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:07:53.385192  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:53.418890  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:07:53.418929  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:53.453244  685562 logs.go:123] Gathering logs for kube-controller-manager [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2] ...
	I1124 09:07:53.453269  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:53.481875  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:07:53.481910  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:54.171385  740119 kubeadm.go:884] updating cluster {Name:newest-cni-654569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-654569 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 09:07:54.171609  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:54.484998  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:54.798141  740119 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1124 09:07:55.115412  740119 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 09:07:55.115492  740119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 09:07:55.141981  740119 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 09:07:55.142005  740119 cache_images.go:86] Images are preloaded, skipping loading
	I1124 09:07:55.142015  740119 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1124 09:07:55.142138  740119 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-654569 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-654569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 09:07:55.142213  740119 ssh_runner.go:195] Run: sudo crictl info
	I1124 09:07:55.168053  740119 cni.go:84] Creating CNI manager for ""
	I1124 09:07:55.168076  740119 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 09:07:55.168099  740119 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 09:07:55.168136  740119 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-654569 NodeName:newest-cni-654569 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 09:07:55.168268  740119 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-654569"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 09:07:55.168345  740119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1124 09:07:55.176299  740119 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 09:07:55.176368  740119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 09:07:55.184050  740119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1124 09:07:55.197168  740119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1124 09:07:55.209710  740119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
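The kubeadm/kubelet/kube-proxy config generated above is written to /var/tmp/minikube/kubeadm.yaml.new. In this run the existing control plane is restarted and kubeadm init is not re-run; purely as a hedged usage sketch, a file in this form would normally be handed to kubeadm via --config, for example:

	# Hypothetical usage sketch, not taken from this run: dry-run an init
	# against the generated config to validate it without touching the node.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run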
	I1124 09:07:55.222578  740119 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 09:07:55.225988  740119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 09:07:55.235552  740119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:07:55.313702  740119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:07:55.335543  740119 certs.go:69] Setting up /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569 for IP: 192.168.103.2
	I1124 09:07:55.335565  740119 certs.go:195] generating shared ca certs ...
	I1124 09:07:55.335598  740119 certs.go:227] acquiring lock for ca certs: {Name:mk977567029a87925dffc7f909bfa5f74bf239fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:07:55.335764  740119 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key
	I1124 09:07:55.335825  740119 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key
	I1124 09:07:55.335838  740119 certs.go:257] generating profile certs ...
	I1124 09:07:55.335956  740119 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/client.key
	I1124 09:07:55.336043  740119 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/apiserver.key.7c762e30
	I1124 09:07:55.336093  740119 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/proxy-client.key
	I1124 09:07:55.336234  740119 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem (1338 bytes)
	W1124 09:07:55.336298  740119 certs.go:480] ignoring /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524_empty.pem, impossibly tiny 0 bytes
	I1124 09:07:55.336312  740119 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 09:07:55.336362  740119 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/ca.pem (1082 bytes)
	I1124 09:07:55.336411  740119 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/cert.pem (1123 bytes)
	I1124 09:07:55.336441  740119 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/certs/key.pem (1675 bytes)
	I1124 09:07:55.336501  740119 certs.go:484] found cert: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem (1708 bytes)
	I1124 09:07:55.337131  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 09:07:55.356448  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 09:07:55.375062  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 09:07:55.393674  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 09:07:55.417631  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 09:07:55.439443  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 09:07:55.457653  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 09:07:55.475347  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/newest-cni-654569/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 09:07:55.493913  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 09:07:55.510946  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/certs/439524.pem --> /usr/share/ca-certificates/439524.pem (1338 bytes)
	I1124 09:07:55.529348  740119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/ssl/certs/4395242.pem --> /usr/share/ca-certificates/4395242.pem (1708 bytes)
	I1124 09:07:55.549329  740119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 09:07:55.564652  740119 ssh_runner.go:195] Run: openssl version
	I1124 09:07:55.571017  740119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4395242.pem && ln -fs /usr/share/ca-certificates/4395242.pem /etc/ssl/certs/4395242.pem"
	I1124 09:07:55.580738  740119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4395242.pem
	I1124 09:07:55.584597  740119 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 08:38 /usr/share/ca-certificates/4395242.pem
	I1124 09:07:55.584654  740119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4395242.pem
	I1124 09:07:55.625418  740119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4395242.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 09:07:55.634285  740119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 09:07:55.645169  740119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:07:55.649484  740119 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:07:55.649544  740119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 09:07:55.688419  740119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 09:07:55.698286  740119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/439524.pem && ln -fs /usr/share/ca-certificates/439524.pem /etc/ssl/certs/439524.pem"
	I1124 09:07:55.707646  740119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/439524.pem
	I1124 09:07:55.711576  740119 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 08:38 /usr/share/ca-certificates/439524.pem
	I1124 09:07:55.711628  740119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/439524.pem
	I1124 09:07:55.746241  740119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/439524.pem /etc/ssl/certs/51391683.0"
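The openssl -hash / ln -fs pairs above implement OpenSSL's hashed-directory layout for /etc/ssl/certs, where each CA file is linked as <subject-hash>.0. A minimal sketch showing where names like b5213941.0 come from (paths taken from the log):

	# Hypothetical restatement of the hash-and-link step performed above.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"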
	I1124 09:07:55.757113  740119 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 09:07:55.761360  740119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 09:07:55.796496  740119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 09:07:55.833324  740119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 09:07:55.871233  740119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 09:07:55.928790  740119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 09:07:55.981088  740119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
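Each openssl -checkend 86400 call above exits non-zero if the certificate would expire within the next 24 hours, which is how the restart path decides whether profile certs need regenerating. A hedged sketch looping the same check over a few of the certs named above:

	# Hypothetical loop over the expiry check used above (one exit status per cert).
	for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  if openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400; then
	    echo "${crt}: valid for at least 24h"
	  else
	    echo "${crt}: expires within 24h (or could not be read)"
	  fi
	done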
	I1124 09:07:56.038125  740119 kubeadm.go:401] StartCluster: {Name:newest-cni-654569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-654569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:fals
e ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 09:07:56.038266  740119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 09:07:56.038340  740119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 09:07:56.089186  740119 cri.go:89] found id: "a157a979800211fa2e48d8456dc72d55487fd44672e748a038a14bcc77c5426d"
	I1124 09:07:56.089214  740119 cri.go:89] found id: "4d92cd75ec81a3e2b7fa0b35523d1d0fc3ccacfa3b38f5f98d2655b7a7c124a2"
	I1124 09:07:56.089219  740119 cri.go:89] found id: "dee3d2e2ae24367219d65f5301765dfc5ce4b878b6bf6b20475c4530de6b6720"
	I1124 09:07:56.089225  740119 cri.go:89] found id: "75f98a0d1c57b6adbaedd7f4784510d16b26d25295e93b1412b611a036a9853b"
	I1124 09:07:56.089229  740119 cri.go:89] found id: "f4e1fceba7711096161d4a95501e91ea1d83cfe4c620e5995126dd9c543b960f"
	I1124 09:07:56.089246  740119 cri.go:89] found id: "3e84b165b0b37fab2be27fc4595dad9d25ec66c3a3f0b546bac1d95f55f60749"
	I1124 09:07:56.089251  740119 cri.go:89] found id: "158de48e001d34e944b0f5bc8cd62e5c78fdfe8edb46bdd955885f2b6b096c38"
	I1124 09:07:56.089255  740119 cri.go:89] found id: "e31cf74acac5f31b3b47fc57578c8eb5620c5f68b51d75b3d896d2fdc6759487"
	I1124 09:07:56.089258  740119 cri.go:89] found id: "a6a092f46c17fe1320efa54d0d748c6d5d89cbc4d13446b32d574312c288c0ff"
	I1124 09:07:56.089267  740119 cri.go:89] found id: ""
	I1124 09:07:56.089316  740119 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1124 09:07:56.130416  740119 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8","pid":863,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8/rootfs","created":"2025-11-24T09:07:55.953337283Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-654569_c6dcb99e56c6b456784e4cc4e4a8aa33","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c6dcb99e56c6b456784e4cc4e4a8aa33"},"owner":"root"},{"ociVersion":"1.2.1","id":"3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef","pid":855,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef/rootfs","created":"2025-11-24T09:07:55.94638955Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef","io.kubernetes.cr
i.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-654569_d7c5b44497a828ab83d4aadcafefd5cb","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d7c5b44497a828ab83d4aadcafefd5cb"},"owner":"root"},{"ociVersion":"1.2.1","id":"4d92cd75ec81a3e2b7fa0b35523d1d0fc3ccacfa3b38f5f98d2655b7a7c124a2","pid":958,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d92cd75ec81a3e2b7fa0b35523d1d0fc3ccacfa3b38f5f98d2655b7a7c124a2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d92cd75ec81a3e2b7fa0b35523d1d0fc3ccacfa3b38f5f98d2655b7a7c124a2/rootfs","created":"2025-11-24T09:07:56.065006307Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0","i
o.kubernetes.cri.sandbox-id":"3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d7c5b44497a828ab83d4aadcafefd5cb"},"owner":"root"},{"ociVersion":"1.2.1","id":"75f98a0d1c57b6adbaedd7f4784510d16b26d25295e93b1412b611a036a9853b","pid":937,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/75f98a0d1c57b6adbaedd7f4784510d16b26d25295e93b1412b611a036a9853b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/75f98a0d1c57b6adbaedd7f4784510d16b26d25295e93b1412b611a036a9853b/rootfs","created":"2025-11-24T09:07:56.048707764Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.5-0","io.kubernetes.cri.sandbox-id":"ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847","io.kubernetes.cri.s
andbox-name":"etcd-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4bfaebf212f3ea670ce06d699a6f1411"},"owner":"root"},{"ociVersion":"1.2.1","id":"a157a979800211fa2e48d8456dc72d55487fd44672e748a038a14bcc77c5426d","pid":973,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a157a979800211fa2e48d8456dc72d55487fd44672e748a038a14bcc77c5426d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a157a979800211fa2e48d8456dc72d55487fd44672e748a038a14bcc77c5426d/rootfs","created":"2025-11-24T09:07:56.059491091Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.35.0-beta.0","io.kubernetes.cri.sandbox-id":"085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kube
rnetes.cri.sandbox-uid":"c6dcb99e56c6b456784e4cc4e4a8aa33"},"owner":"root"},{"ociVersion":"1.2.1","id":"d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857","pid":824,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857/rootfs","created":"2025-11-24T09:07:55.935948038Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-654569_536efe6b5a7bd07d056d539cdc365e07","io.kubernetes.
cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"536efe6b5a7bd07d056d539cdc365e07"},"owner":"root"},{"ociVersion":"1.2.1","id":"dee3d2e2ae24367219d65f5301765dfc5ce4b878b6bf6b20475c4530de6b6720","pid":946,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dee3d2e2ae24367219d65f5301765dfc5ce4b878b6bf6b20475c4530de6b6720","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dee3d2e2ae24367219d65f5301765dfc5ce4b878b6bf6b20475c4530de6b6720/rootfs","created":"2025-11-24T09:07:56.054487864Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.35.0-beta.0","io.kubernetes.cri.sandbox-id":"d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-654569","io.ku
bernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"536efe6b5a7bd07d056d539cdc365e07"},"owner":"root"},{"ociVersion":"1.2.1","id":"ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847","pid":810,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847/rootfs","created":"2025-11-24T09:07:55.928063262Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-654569_4bfae
bf212f3ea670ce06d699a6f1411","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-654569","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4bfaebf212f3ea670ce06d699a6f1411"},"owner":"root"}]
	I1124 09:07:56.130724  740119 cri.go:126] list returned 8 containers
	I1124 09:07:56.130741  740119 cri.go:129] container: {ID:085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8 Status:running}
	I1124 09:07:56.130760  740119 cri.go:131] skipping 085c505ffdf16ee3bbfba326bbae3ba905bdd6db5bbd0807b35249233a20deb8 - not in ps
	I1124 09:07:56.130766  740119 cri.go:129] container: {ID:3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef Status:running}
	I1124 09:07:56.130772  740119 cri.go:131] skipping 3d414345fbe3e85876ff52a53e0dd775bcc9e3538ec3de801217bc1f924750ef - not in ps
	I1124 09:07:56.130778  740119 cri.go:129] container: {ID:4d92cd75ec81a3e2b7fa0b35523d1d0fc3ccacfa3b38f5f98d2655b7a7c124a2 Status:running}
	I1124 09:07:56.130797  740119 cri.go:135] skipping {4d92cd75ec81a3e2b7fa0b35523d1d0fc3ccacfa3b38f5f98d2655b7a7c124a2 running}: state = "running", want "paused"
	I1124 09:07:56.130808  740119 cri.go:129] container: {ID:75f98a0d1c57b6adbaedd7f4784510d16b26d25295e93b1412b611a036a9853b Status:running}
	I1124 09:07:56.130814  740119 cri.go:135] skipping {75f98a0d1c57b6adbaedd7f4784510d16b26d25295e93b1412b611a036a9853b running}: state = "running", want "paused"
	I1124 09:07:56.130821  740119 cri.go:129] container: {ID:a157a979800211fa2e48d8456dc72d55487fd44672e748a038a14bcc77c5426d Status:running}
	I1124 09:07:56.130829  740119 cri.go:135] skipping {a157a979800211fa2e48d8456dc72d55487fd44672e748a038a14bcc77c5426d running}: state = "running", want "paused"
	I1124 09:07:56.130835  740119 cri.go:129] container: {ID:d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857 Status:running}
	I1124 09:07:56.130842  740119 cri.go:131] skipping d7c5da3c9f380227ff338f1541bd7b1fd0403a24cadfca54891c190456351857 - not in ps
	I1124 09:07:56.130849  740119 cri.go:129] container: {ID:dee3d2e2ae24367219d65f5301765dfc5ce4b878b6bf6b20475c4530de6b6720 Status:running}
	I1124 09:07:56.130857  740119 cri.go:135] skipping {dee3d2e2ae24367219d65f5301765dfc5ce4b878b6bf6b20475c4530de6b6720 running}: state = "running", want "paused"
	I1124 09:07:56.130863  740119 cri.go:129] container: {ID:ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847 Status:running}
	I1124 09:07:56.130871  740119 cri.go:131] skipping ef2aeb6b71f6cc7a778fac614098b61c81c55c2c056631807202b2ea09d3a847 - not in ps
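The JSON blob above is the output of runc list -f json, which cri.go then filters: sandbox entries "not in ps" are skipped, and running containers are skipped because the caller asked for paused ones. A minimal sketch, assuming jq is available on the host, that pulls out just the IDs and states from the same output:

	# Hypothetical jq one-liner over the same runc output shown above.
	sudo runc --root /run/containerd/runc/k8s.io list -f json \
	  | jq -r '.[] | "\(.id) \(.status)"'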
	I1124 09:07:56.130937  740119 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 09:07:56.143034  740119 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 09:07:56.143057  740119 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 09:07:56.143107  740119 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 09:07:56.156401  740119 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 09:07:56.157955  740119 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-654569" does not appear in /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:07:56.158947  740119 kubeconfig.go:62] /home/jenkins/minikube-integration/21978-435860/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-654569" cluster setting kubeconfig missing "newest-cni-654569" context setting]
	I1124 09:07:56.161516  740119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:07:56.165172  740119 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 09:07:56.177468  740119 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1124 09:07:56.177509  740119 kubeadm.go:602] duration metric: took 34.445893ms to restartPrimaryControlPlane
	I1124 09:07:56.177544  740119 kubeadm.go:403] duration metric: took 139.430697ms to StartCluster
	I1124 09:07:56.177569  740119 settings.go:142] acquiring lock: {Name:mk02cbf979fc883a7cfa89d50f2f1c6cf88236e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:07:56.177697  740119 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:07:56.180208  740119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/kubeconfig: {Name:mk42183bd63f8b44881819ac352384aa0ef5afa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 09:07:56.180569  740119 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 09:07:56.181072  740119 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 09:07:56.181193  740119 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-654569"
	I1124 09:07:56.181213  740119 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-654569"
	W1124 09:07:56.181228  740119 addons.go:248] addon storage-provisioner should already be in state true
	I1124 09:07:56.181233  740119 addons.go:70] Setting default-storageclass=true in profile "newest-cni-654569"
	I1124 09:07:56.181253  740119 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-654569"
	I1124 09:07:56.181258  740119 host.go:66] Checking if "newest-cni-654569" exists ...
	I1124 09:07:56.181259  740119 addons.go:70] Setting dashboard=true in profile "newest-cni-654569"
	I1124 09:07:56.181277  740119 addons.go:239] Setting addon dashboard=true in "newest-cni-654569"
	W1124 09:07:56.181285  740119 addons.go:248] addon dashboard should already be in state true
	I1124 09:07:56.181350  740119 host.go:66] Checking if "newest-cni-654569" exists ...
	I1124 09:07:56.181589  740119 addons.go:70] Setting metrics-server=true in profile "newest-cni-654569"
	I1124 09:07:56.181618  740119 addons.go:239] Setting addon metrics-server=true in "newest-cni-654569"
	W1124 09:07:56.181627  740119 addons.go:248] addon metrics-server should already be in state true
	I1124 09:07:56.181656  740119 host.go:66] Checking if "newest-cni-654569" exists ...
	I1124 09:07:56.181211  740119 config.go:182] Loaded profile config "newest-cni-654569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:07:56.181598  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:56.181797  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:56.181818  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:56.182113  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:56.188544  740119 out.go:179] * Verifying Kubernetes components...
	I1124 09:07:56.190275  740119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 09:07:56.212861  740119 addons.go:239] Setting addon default-storageclass=true in "newest-cni-654569"
	W1124 09:07:56.213062  740119 addons.go:248] addon default-storageclass should already be in state true
	I1124 09:07:56.213130  740119 host.go:66] Checking if "newest-cni-654569" exists ...
	I1124 09:07:56.214362  740119 cli_runner.go:164] Run: docker container inspect newest-cni-654569 --format={{.State.Status}}
	I1124 09:07:56.218487  740119 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 09:07:56.218482  740119 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 09:07:56.219671  740119 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 09:07:56.220234  740119 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 09:07:56.220300  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:56.221641  740119 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 09:07:56.223298  740119 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 09:07:54.323974  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	W1124 09:07:56.325833  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	I1124 09:07:56.223300  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 09:07:56.223453  740119 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 09:07:56.223558  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:56.224350  740119 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:07:56.224372  740119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 09:07:56.224430  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:56.242662  740119 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 09:07:56.242685  740119 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 09:07:56.242745  740119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-654569
	I1124 09:07:56.256044  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:56.263870  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:56.266228  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:56.290380  740119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/newest-cni-654569/id_rsa Username:docker}
	I1124 09:07:56.375577  740119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 09:07:56.395131  740119 api_server.go:52] waiting for apiserver process to appear ...
	I1124 09:07:56.395217  740119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 09:07:56.405436  740119 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 09:07:56.406355  740119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1124 09:07:56.407356  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 09:07:56.407371  740119 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 09:07:56.408594  740119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 09:07:56.417254  740119 api_server.go:72] duration metric: took 236.638633ms to wait for apiserver process to appear ...
	I1124 09:07:56.417983  740119 api_server.go:88] waiting for apiserver healthz status ...
	I1124 09:07:56.418024  740119 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:07:56.425934  740119 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 09:07:56.425958  740119 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 09:07:56.426098  740119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 09:07:56.431454  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 09:07:56.431492  740119 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 09:07:56.446759  740119 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:07:56.446786  740119 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 09:07:56.459386  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 09:07:56.459415  740119 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 09:07:56.471259  740119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 09:07:56.479755  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 09:07:56.479778  740119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 09:07:56.496377  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 09:07:56.496403  740119 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 09:07:56.512775  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 09:07:56.512802  740119 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 09:07:56.529546  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 09:07:56.529574  740119 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 09:07:56.543946  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 09:07:56.543970  740119 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 09:07:56.559581  740119 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 09:07:56.559607  740119 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 09:07:56.574571  740119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
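	The addon step above follows a simple pattern: each manifest is copied to /etc/kubernetes/addons/ on the node over SSH, then the group is applied in a single kubectl invocation against the node-local kubeconfig. For reference only, an equivalent manual apply using the same paths the log shows (kubectl accepts a directory for -f and applies every manifest in it) would be:
	
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/
	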
	I1124 09:07:58.027053  740119 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:07:58.027087  740119 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:07:58.027101  740119 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:07:58.039799  740119 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 09:07:58.039831  740119 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 09:07:58.419149  740119 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:07:58.424822  740119 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:07:58.424853  740119 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:07:58.622336  740119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.213680166s)
	I1124 09:07:58.622395  740119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.196259725s)
	I1124 09:07:58.624475  740119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.153162875s)
	I1124 09:07:58.624505  740119 addons.go:495] Verifying addon metrics-server=true in "newest-cni-654569"
	I1124 09:07:58.624568  740119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.049945341s)
	I1124 09:07:58.628984  740119 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-654569 addons enable metrics-server
	
	I1124 09:07:58.634190  740119 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1124 09:07:58.635259  740119 addons.go:530] duration metric: took 2.454203545s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1124 09:07:58.918078  740119 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:07:58.922902  740119 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 09:07:58.922941  740119 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 09:07:59.418541  740119 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 09:07:59.422776  740119 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 09:07:59.423896  740119 api_server.go:141] control plane version: v1.35.0-beta.0
	I1124 09:07:59.423923  740119 api_server.go:131] duration metric: took 3.005920248s to wait for apiserver health ...
	I1124 09:07:59.423937  740119 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 09:07:59.427925  740119 system_pods.go:59] 9 kube-system pods found
	I1124 09:07:59.427952  740119 system_pods.go:61] "coredns-7d764666f9-x9q9b" [506d2b46-76b4-495b-92ec-1d61d12cdb7c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 09:07:59.427960  740119 system_pods.go:61] "etcd-newest-cni-654569" [0a522704-a865-4e7c-8ebe-d642c5a9818c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 09:07:59.427969  740119 system_pods.go:61] "kindnet-qnftx" [11feac68-231b-41fd-a5b6-cb38432ab914] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 09:07:59.427977  740119 system_pods.go:61] "kube-apiserver-newest-cni-654569" [792974fb-5baf-43b4-b16f-984afe8de703] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 09:07:59.427983  740119 system_pods.go:61] "kube-controller-manager-newest-cni-654569" [4bd5630b-c62e-4b79-83cb-ac16b0119af9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 09:07:59.427988  740119 system_pods.go:61] "kube-proxy-tnmqt" [c21f06f2-1c7b-4a84-ada1-ce50e281f77d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 09:07:59.427993  740119 system_pods.go:61] "kube-scheduler-newest-cni-654569" [eadf3127-15eb-4f9f-afc4-00c1e19cacca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 09:07:59.428001  740119 system_pods.go:61] "metrics-server-5d785b57d4-qhnmt" [ae201e6f-2fb5-4b64-a376-31b95b002461] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 09:07:59.428010  740119 system_pods.go:61] "storage-provisioner" [930332b4-361f-418c-abf4-8d05d08ef9dd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 09:07:59.428016  740119 system_pods.go:74] duration metric: took 4.072733ms to wait for pod list to return data ...
	I1124 09:07:59.428026  740119 default_sa.go:34] waiting for default service account to be created ...
	I1124 09:07:59.430199  740119 default_sa.go:45] found service account: "default"
	I1124 09:07:59.430224  740119 default_sa.go:55] duration metric: took 2.191389ms for default service account to be created ...
	I1124 09:07:59.430236  740119 kubeadm.go:587] duration metric: took 3.249627773s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 09:07:59.430252  740119 node_conditions.go:102] verifying NodePressure condition ...
	I1124 09:07:59.432586  740119 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 09:07:59.432612  740119 node_conditions.go:123] node cpu capacity is 8
	I1124 09:07:59.432631  740119 node_conditions.go:105] duration metric: took 2.37222ms to run NodePressure ...
	I1124 09:07:59.432647  740119 start.go:242] waiting for startup goroutines ...
	I1124 09:07:59.432661  740119 start.go:247] waiting for cluster config update ...
	I1124 09:07:59.432679  740119 start.go:256] writing updated cluster config ...
	I1124 09:07:59.432927  740119 ssh_runner.go:195] Run: rm -f paused
	I1124 09:07:59.491414  740119 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1124 09:07:59.492851  740119 out.go:179] * Done! kubectl is now configured to use "newest-cni-654569" cluster and "default" namespace by default
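	The healthz polling in the run above follows the usual recovery sequence after a control-plane restart: unauthenticated probes are rejected with 403 until the RBAC bootstrap roles exist, /healthz then returns 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks complete, and finally returns 200. As a minimal sketch (standard kubectl/curl usage, not part of the test run; context name and endpoint taken from the log), the same per-check breakdown can be fetched by hand:
	
		kubectl --context newest-cni-654569 get --raw '/healthz?verbose'
		curl -sk https://192.168.103.2:8443/healthz
	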
	I1124 09:07:56.020541  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:07:56.021147  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:07:56.021210  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:07:56.021265  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:07:56.067013  685562 cri.go:89] found id: "cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:56.067049  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:56.067057  685562 cri.go:89] found id: ""
	I1124 09:07:56.067068  685562 logs.go:282] 2 containers: [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:07:56.067133  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.072142  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.077032  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:07:56.077096  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:07:56.121815  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:56.121844  685562 cri.go:89] found id: ""
	I1124 09:07:56.121854  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:07:56.121916  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.127997  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:07:56.128077  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:07:56.168618  685562 cri.go:89] found id: ""
	I1124 09:07:56.168642  685562 logs.go:282] 0 containers: []
	W1124 09:07:56.168667  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:07:56.168677  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:07:56.168742  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:07:56.218281  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:56.218356  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:56.218373  685562 cri.go:89] found id: ""
	I1124 09:07:56.218393  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:07:56.218528  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.224636  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.229661  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:07:56.229765  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:07:56.293945  685562 cri.go:89] found id: ""
	I1124 09:07:56.293977  685562 logs.go:282] 0 containers: []
	W1124 09:07:56.293988  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:07:56.293996  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:07:56.294060  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:07:56.334478  685562 cri.go:89] found id: "d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:56.334503  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:56.334509  685562 cri.go:89] found id: ""
	I1124 09:07:56.334519  685562 logs.go:282] 2 containers: [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:07:56.334580  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.340444  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:56.345844  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:07:56.345926  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:07:56.380080  685562 cri.go:89] found id: ""
	I1124 09:07:56.380105  685562 logs.go:282] 0 containers: []
	W1124 09:07:56.380114  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:07:56.380122  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:07:56.380178  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:07:56.420110  685562 cri.go:89] found id: ""
	I1124 09:07:56.420138  685562 logs.go:282] 0 containers: []
	W1124 09:07:56.420156  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:07:56.420171  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:07:56.420193  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:07:56.442022  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:07:56.442066  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:56.490969  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:07:56.491011  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:56.527453  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:07:56.527506  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:07:56.562016  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:07:56.562048  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:07:56.660117  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:07:56.660153  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:07:56.718059  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:07:56.718087  685562 logs.go:123] Gathering logs for kube-apiserver [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6] ...
	I1124 09:07:56.718105  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:56.750284  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:07:56.750317  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:56.785923  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:07:56.785954  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:56.821311  685562 logs.go:123] Gathering logs for kube-controller-manager [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2] ...
	I1124 09:07:56.821343  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:56.849832  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:07:56.849859  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:56.884094  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:07:56.884132  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:07:59.430422  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:07:59.430857  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:07:59.430924  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:07:59.430985  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:07:59.460698  685562 cri.go:89] found id: "cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:07:59.460723  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:59.460729  685562 cri.go:89] found id: ""
	I1124 09:07:59.460739  685562 logs.go:282] 2 containers: [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:07:59.460804  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.465196  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.469225  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:07:59.469304  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:07:59.502134  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:59.502174  685562 cri.go:89] found id: ""
	I1124 09:07:59.502186  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:07:59.502243  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.506739  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:07:59.506808  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:07:59.539002  685562 cri.go:89] found id: ""
	I1124 09:07:59.539033  685562 logs.go:282] 0 containers: []
	W1124 09:07:59.539045  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:07:59.539055  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:07:59.539149  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:07:59.568146  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:07:59.568167  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:07:59.568172  685562 cri.go:89] found id: ""
	I1124 09:07:59.568181  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:07:59.568248  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.572864  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.577269  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:07:59.577338  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:07:59.607818  685562 cri.go:89] found id: ""
	I1124 09:07:59.607848  685562 logs.go:282] 0 containers: []
	W1124 09:07:59.607860  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:07:59.607869  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:07:59.607928  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:07:59.638184  685562 cri.go:89] found id: "d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:07:59.638205  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:59.638210  685562 cri.go:89] found id: ""
	I1124 09:07:59.638219  685562 logs.go:282] 2 containers: [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:07:59.638278  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.642979  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:07:59.646971  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:07:59.647028  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:07:59.675306  685562 cri.go:89] found id: ""
	I1124 09:07:59.675330  685562 logs.go:282] 0 containers: []
	W1124 09:07:59.675338  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:07:59.675348  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:07:59.675396  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:07:59.702893  685562 cri.go:89] found id: ""
	I1124 09:07:59.702927  685562 logs.go:282] 0 containers: []
	W1124 09:07:59.702940  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:07:59.702954  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:07:59.702968  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:07:59.739374  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:07:59.739405  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:07:59.779375  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:07:59.779419  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 09:07:59.835861  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:07:59.835893  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1124 09:07:58.824571  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	W1124 09:08:01.324130  733323 pod_ready.go:104] pod "coredns-66bc5c9577-pj9dj" is not "Ready", error: <nil>
	I1124 09:07:59.927248  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:07:59.927276  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:07:59.943076  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:07:59.943104  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:07:59.981096  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:07:59.981130  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:08:00.010682  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:08:00.010711  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:08:00.048280  685562 logs.go:123] Gathering logs for kube-controller-manager [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2] ...
	I1124 09:08:00.048318  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:08:00.082383  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:08:00.082424  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:08:00.120265  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:08:00.120295  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:08:00.191552  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:08:00.191582  685562 logs.go:123] Gathering logs for kube-apiserver [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6] ...
	I1124 09:08:00.191599  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:08:02.725051  685562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 09:08:02.725506  685562 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 09:08:02.725572  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 09:08:02.725639  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 09:08:02.760102  685562 cri.go:89] found id: "cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:08:02.760123  685562 cri.go:89] found id: "7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:08:02.760128  685562 cri.go:89] found id: ""
	I1124 09:08:02.760136  685562 logs.go:282] 2 containers: [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00]
	I1124 09:08:02.760187  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:08:02.764521  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:08:02.768698  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 09:08:02.768755  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 09:08:02.799438  685562 cri.go:89] found id: "b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:08:02.799482  685562 cri.go:89] found id: ""
	I1124 09:08:02.799500  685562 logs.go:282] 1 containers: [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2]
	I1124 09:08:02.799562  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:08:02.804335  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 09:08:02.804386  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 09:08:02.835356  685562 cri.go:89] found id: ""
	I1124 09:08:02.835386  685562 logs.go:282] 0 containers: []
	W1124 09:08:02.835394  685562 logs.go:284] No container was found matching "coredns"
	I1124 09:08:02.835402  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 09:08:02.835477  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 09:08:02.867935  685562 cri.go:89] found id: "b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:08:02.867973  685562 cri.go:89] found id: "beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:08:02.867980  685562 cri.go:89] found id: ""
	I1124 09:08:02.867991  685562 logs.go:282] 2 containers: [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9]
	I1124 09:08:02.868067  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:08:02.872837  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:08:02.877803  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 09:08:02.877878  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 09:08:02.912080  685562 cri.go:89] found id: ""
	I1124 09:08:02.912106  685562 logs.go:282] 0 containers: []
	W1124 09:08:02.912116  685562 logs.go:284] No container was found matching "kube-proxy"
	I1124 09:08:02.912124  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 09:08:02.912185  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 09:08:02.943291  685562 cri.go:89] found id: "d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:08:02.943317  685562 cri.go:89] found id: "c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:08:02.943322  685562 cri.go:89] found id: ""
	I1124 09:08:02.943334  685562 logs.go:282] 2 containers: [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0]
	I1124 09:08:02.943394  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:08:02.947702  685562 ssh_runner.go:195] Run: which crictl
	I1124 09:08:02.951865  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 09:08:02.951922  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 09:08:02.980119  685562 cri.go:89] found id: ""
	I1124 09:08:02.980158  685562 logs.go:282] 0 containers: []
	W1124 09:08:02.980169  685562 logs.go:284] No container was found matching "kindnet"
	I1124 09:08:02.980176  685562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 09:08:02.980223  685562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 09:08:03.010110  685562 cri.go:89] found id: ""
	I1124 09:08:03.010138  685562 logs.go:282] 0 containers: []
	W1124 09:08:03.010148  685562 logs.go:284] No container was found matching "storage-provisioner"
	I1124 09:08:03.010161  685562 logs.go:123] Gathering logs for dmesg ...
	I1124 09:08:03.010177  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 09:08:03.030907  685562 logs.go:123] Gathering logs for describe nodes ...
	I1124 09:08:03.031005  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 09:08:03.119334  685562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 09:08:03.119358  685562 logs.go:123] Gathering logs for kube-apiserver [cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6] ...
	I1124 09:08:03.119375  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf2b5d68f328c2a55cdab845829c6d105b971d0714ed17a5a90e96b7529628a6"
	I1124 09:08:03.156162  685562 logs.go:123] Gathering logs for kube-scheduler [b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17] ...
	I1124 09:08:03.156191  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b018c37b5155a45849bf7701c25cfd1ff2e5d08a4a174fd7447b3d1e5014bc17"
	I1124 09:08:03.188141  685562 logs.go:123] Gathering logs for kube-controller-manager [d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2] ...
	I1124 09:08:03.188176  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d699ab73a4055ff8087251df38dd45b8348d567240c50e72782e32ce3c71bbb2"
	I1124 09:08:03.219076  685562 logs.go:123] Gathering logs for kube-controller-manager [c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0] ...
	I1124 09:08:03.219107  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c70fdaa8d0b65a6cc40d923809782c40bad08a66e1cd7ef35c3bd63e2344a7d0"
	I1124 09:08:03.264267  685562 logs.go:123] Gathering logs for container status ...
	I1124 09:08:03.264295  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 09:08:03.312634  685562 logs.go:123] Gathering logs for kubelet ...
	I1124 09:08:03.312696  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 09:08:03.452700  685562 logs.go:123] Gathering logs for kube-apiserver [7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00] ...
	I1124 09:08:03.452735  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7359853367f0edc54ad7b43f974b25c5e084487a9f1f0e85d38c8ad9736fcd00"
	I1124 09:08:03.489747  685562 logs.go:123] Gathering logs for etcd [b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2] ...
	I1124 09:08:03.489773  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0f5e195a2427e1475b232369ca31232e850412d5ccf99c87ab9d6ef0d230ec2"
	I1124 09:08:03.529426  685562 logs.go:123] Gathering logs for kube-scheduler [beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9] ...
	I1124 09:08:03.529473  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 beba2c039cf143777ad7314b49e8a78d52025ed5525530635c9debdb1ab66ce9"
	I1124 09:08:03.566138  685562 logs.go:123] Gathering logs for containerd ...
	I1124 09:08:03.566169  685562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	0048863eeab5b       56cc512116c8f       10 seconds ago      Running             busybox                   0                   6a1c58c44dabd       busybox                                                default
	4cb7a2e1543a2       52546a367cc9e       15 seconds ago      Running             coredns                   0                   754bbf6ee037f       coredns-66bc5c9577-xrvmp                               kube-system
	7b6759161aaf7       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   d739ceffcd719       storage-provisioner                                    kube-system
	d61b328ab5ab1       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   94ba4ea8cc394       kindnet-b9gr6                                          kube-system
	d08299c781b5b       8aa150647e88a       26 seconds ago      Running             kube-proxy                0                   2f34fd49731c3       kube-proxy-5hvkq                                       kube-system
	8511ac48cd627       88320b5498ff2       36 seconds ago      Running             kube-scheduler            0                   2a5f4ee9cdbe8       kube-scheduler-default-k8s-diff-port-603918            kube-system
	dd669bd5eb5c8       a3e246e9556e9       36 seconds ago      Running             etcd                      0                   306d5a6f33d85       etcd-default-k8s-diff-port-603918                      kube-system
	ab596f3f89dfb       01e8bacf0f500       36 seconds ago      Running             kube-controller-manager   0                   8792115764e5c       kube-controller-manager-default-k8s-diff-port-603918   kube-system
	2360a77fd7012       a5f569d49a979       36 seconds ago      Running             kube-apiserver            0                   75341afc5f34d       kube-apiserver-default-k8s-diff-port-603918            kube-system
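	
	The table above is the node's container listing gathered post-mortem; the same view can be reproduced with crictl, mirroring commands that appear elsewhere in this log (<container-id> is a placeholder for an ID from the first column):
	
		sudo crictl ps -a
		sudo /usr/local/bin/crictl logs --tail 400 <container-id>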
	
	
	==> containerd <==
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.478802026Z" level=info msg="Container 4cb7a2e1543a2f315a2834a2bbafb7016a0d6e1122b995a93ef534144c83b8d7: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.479012008Z" level=info msg="CreateContainer within sandbox \"d739ceffcd719ee21dc72de12352bcc6b46a8ea7096e691b55001bcadbbe3d5b\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"7b6759161aaf750bd83cd8f574e0289de4f56c8660bfd1a4f8c9fef29a584e58\""
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.479545666Z" level=info msg="StartContainer for \"7b6759161aaf750bd83cd8f574e0289de4f56c8660bfd1a4f8c9fef29a584e58\""
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.480346722Z" level=info msg="connecting to shim 7b6759161aaf750bd83cd8f574e0289de4f56c8660bfd1a4f8c9fef29a584e58" address="unix:///run/containerd/s/d778d31c26635c66d6dc4f813da4e6a22952fbeba29440ec23af6ffefe8d0d08" protocol=ttrpc version=3
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.486155548Z" level=info msg="CreateContainer within sandbox \"754bbf6ee037faf2eb0ab5772f9d30688e7f23c89ac6c4b2ede2527106b6acca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4cb7a2e1543a2f315a2834a2bbafb7016a0d6e1122b995a93ef534144c83b8d7\""
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.486734729Z" level=info msg="StartContainer for \"4cb7a2e1543a2f315a2834a2bbafb7016a0d6e1122b995a93ef534144c83b8d7\""
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.487838064Z" level=info msg="connecting to shim 4cb7a2e1543a2f315a2834a2bbafb7016a0d6e1122b995a93ef534144c83b8d7" address="unix:///run/containerd/s/9462e11f343f3511322fa0215f82b2720128f229d0e5deb7bb15503f13750280" protocol=ttrpc version=3
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.534124607Z" level=info msg="StartContainer for \"7b6759161aaf750bd83cd8f574e0289de4f56c8660bfd1a4f8c9fef29a584e58\" returns successfully"
	Nov 24 09:07:50 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:50.541247266Z" level=info msg="StartContainer for \"4cb7a2e1543a2f315a2834a2bbafb7016a0d6e1122b995a93ef534144c83b8d7\" returns successfully"
	Nov 24 09:07:52 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:52.963904471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:4581197a-228b-4f7d-a2bc-a5ef7b7eb2a7,Namespace:default,Attempt:0,}"
	Nov 24 09:07:52 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:52.995556019Z" level=info msg="connecting to shim 6a1c58c44dabd14a03b5dfe863c4973f78579ecb41a4fa7ac911166778977c19" address="unix:///run/containerd/s/3786ba091400f81e491ed3aac208c2bb9dc958d4a21390cd1a8551bca30a1796" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 09:07:53 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:53.072731700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:4581197a-228b-4f7d-a2bc-a5ef7b7eb2a7,Namespace:default,Attempt:0,} returns sandbox id \"6a1c58c44dabd14a03b5dfe863c4973f78579ecb41a4fa7ac911166778977c19\""
	Nov 24 09:07:53 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:53.074976810Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.615774305Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.616426464Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396641"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.617548446Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.619356653Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.619993882Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.544967961s"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.620039028Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.624039708Z" level=info msg="CreateContainer within sandbox \"6a1c58c44dabd14a03b5dfe863c4973f78579ecb41a4fa7ac911166778977c19\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.630138239Z" level=info msg="Container 0048863eeab5b5eb4ce7dee195c0d1f07faf77d0f583e8b70f21b5cafcbe8dc0: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.635886425Z" level=info msg="CreateContainer within sandbox \"6a1c58c44dabd14a03b5dfe863c4973f78579ecb41a4fa7ac911166778977c19\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"0048863eeab5b5eb4ce7dee195c0d1f07faf77d0f583e8b70f21b5cafcbe8dc0\""
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.636348018Z" level=info msg="StartContainer for \"0048863eeab5b5eb4ce7dee195c0d1f07faf77d0f583e8b70f21b5cafcbe8dc0\""
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.637133578Z" level=info msg="connecting to shim 0048863eeab5b5eb4ce7dee195c0d1f07faf77d0f583e8b70f21b5cafcbe8dc0" address="unix:///run/containerd/s/3786ba091400f81e491ed3aac208c2bb9dc958d4a21390cd1a8551bca30a1796" protocol=ttrpc version=3
	Nov 24 09:07:55 default-k8s-diff-port-603918 containerd[663]: time="2025-11-24T09:07:55.695423794Z" level=info msg="StartContainer for \"0048863eeab5b5eb4ce7dee195c0d1f07faf77d0f583e8b70f21b5cafcbe8dc0\" returns successfully"
	
	
	==> coredns [4cb7a2e1543a2f315a2834a2bbafb7016a0d6e1122b995a93ef534144c83b8d7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56982 - 58653 "HINFO IN 4688269613880167346.4194427648079874584. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020568653s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-603918
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-603918
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393ee3e0b845623107dce6cda4f48ffd5c3d1811
	                    minikube.k8s.io/name=default-k8s-diff-port-603918
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T09_07_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 09:07:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-603918
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 09:08:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 09:08:04 +0000   Mon, 24 Nov 2025 09:07:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 09:08:04 +0000   Mon, 24 Nov 2025 09:07:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 09:08:04 +0000   Mon, 24 Nov 2025 09:07:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 09:08:04 +0000   Mon, 24 Nov 2025 09:07:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-603918
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                18145d9a-fbb9-4960-a6df-c69396b8f79c
	  Boot ID:                    f052cd47-57de-4521-b5fb-139979fdced9
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-xrvmp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-603918                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-b9gr6                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-603918             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-603918    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-5hvkq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-603918             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node default-k8s-diff-port-603918 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node default-k8s-diff-port-603918 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node default-k8s-diff-port-603918 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node default-k8s-diff-port-603918 event: Registered Node default-k8s-diff-port-603918 in Controller
	  Normal  NodeReady                15s   kubelet          Node default-k8s-diff-port-603918 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [dd669bd5eb5c858534503bf9a36b221ef9818ee825b047bcb02a309c174d8b48] <==
	{"level":"warn","ts":"2025-11-24T09:07:30.327907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.348098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.354371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.362872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.370490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.377810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.386421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.394604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.401242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.408583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.416123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.423135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.430751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.453228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.460247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.467236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:30.522898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T09:07:34.346220Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.839529ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597273249824879 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" value_size:124 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T09:07:34.346355Z","caller":"traceutil/trace.go:172","msg":"trace[1201803426] transaction","detail":"{read_only:false; response_revision:260; number_of_response:1; }","duration":"150.764396ms","start":"2025-11-24T09:07:34.195577Z","end":"2025-11-24T09:07:34.346341Z","steps":["trace[1201803426] 'process raft request'  (duration: 38.451987ms)","trace[1201803426] 'compare'  (duration: 111.720232ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T09:07:34.554980Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.226305ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" limit:1 ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2025-11-24T09:07:34.555037Z","caller":"traceutil/trace.go:172","msg":"trace[1924137252] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:261; }","duration":"118.302739ms","start":"2025-11-24T09:07:34.436720Z","end":"2025-11-24T09:07:34.555022Z","steps":["trace[1924137252] 'agreement among raft nodes before linearized reading'  (duration: 52.872904ms)","trace[1924137252] 'range keys from in-memory index tree'  (duration: 65.264136ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:07:34.555129Z","caller":"traceutil/trace.go:172","msg":"trace[1620803049] transaction","detail":"{read_only:false; response_revision:262; number_of_response:1; }","duration":"189.317797ms","start":"2025-11-24T09:07:34.365775Z","end":"2025-11-24T09:07:34.555093Z","steps":["trace[1620803049] 'process raft request'  (duration: 123.831751ms)","trace[1620803049] 'compare'  (duration: 65.302143ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:07:34.734315Z","caller":"traceutil/trace.go:172","msg":"trace[975367433] transaction","detail":"{read_only:false; response_revision:263; number_of_response:1; }","duration":"139.111949ms","start":"2025-11-24T09:07:34.595180Z","end":"2025-11-24T09:07:34.734292Z","steps":["trace[975367433] 'process raft request'  (duration: 83.348894ms)","trace[975367433] 'compare'  (duration: 55.630472ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T09:07:34.821844Z","caller":"traceutil/trace.go:172","msg":"trace[942765471] transaction","detail":"{read_only:false; number_of_response:0; response_revision:263; }","duration":"122.875835ms","start":"2025-11-24T09:07:34.698933Z","end":"2025-11-24T09:07:34.821809Z","steps":["trace[942765471] 'process raft request'  (duration: 122.768079ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T09:07:34.821906Z","caller":"traceutil/trace.go:172","msg":"trace[964689619] transaction","detail":"{read_only:false; number_of_response:0; response_revision:263; }","duration":"122.955322ms","start":"2025-11-24T09:07:34.698933Z","end":"2025-11-24T09:07:34.821888Z","steps":["trace[964689619] 'process raft request'  (duration: 122.83712ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:08:05 up  3:50,  0 user,  load average: 3.48, 3.63, 9.85
	Linux default-k8s-diff-port-603918 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d61b328ab5ab1269962ba5787c878a3ecd23c246f9a62364bfb4b78afc389098] <==
	I1124 09:07:39.814803       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 09:07:39.815136       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 09:07:39.815276       1 main.go:148] setting mtu 1500 for CNI 
	I1124 09:07:39.815294       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 09:07:39.815319       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T09:07:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 09:07:40.101193       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 09:07:40.101229       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 09:07:40.101254       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 09:07:40.110763       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 09:07:40.501891       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 09:07:40.501928       1 metrics.go:72] Registering metrics
	I1124 09:07:40.502008       1 controller.go:711] "Syncing nftables rules"
	I1124 09:07:50.022679       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:07:50.022724       1 main.go:301] handling current node
	I1124 09:08:00.022555       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 09:08:00.022617       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2360a77fd7012a398acfbb7b6a080849121db124c72a7255c5b1d2f454bee8e8] <==
	I1124 09:07:31.132172       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 09:07:31.136040       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:07:31.136063       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 09:07:31.141192       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:07:31.142578       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 09:07:31.236592       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 09:07:31.951000       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 09:07:31.959171       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 09:07:31.959724       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 09:07:32.584208       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 09:07:32.626372       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 09:07:32.739252       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 09:07:32.745183       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 09:07:32.746252       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 09:07:32.750287       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 09:07:32.949296       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 09:07:33.774818       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 09:07:33.788343       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 09:07:33.797967       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 09:07:38.456274       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:07:38.463175       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 09:07:38.748721       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 09:07:38.748722       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 09:07:38.951300       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1124 09:08:02.772709       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:50882: use of closed network connection
	
	
	==> kube-controller-manager [ab596f3f89dfbaa2fced115c34da995ab6bdbb1e8f8fdf34ac0ab8f1fbbe292c] <==
	I1124 09:07:37.954863       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-603918"
	I1124 09:07:37.954937       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 09:07:37.950531       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 09:07:37.950549       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 09:07:37.950561       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 09:07:37.950583       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 09:07:37.950602       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 09:07:37.950611       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 09:07:37.950629       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 09:07:37.950641       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 09:07:37.950659       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 09:07:37.957684       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 09:07:37.950668       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 09:07:37.950676       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 09:07:37.951952       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 09:07:37.951982       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 09:07:37.957860       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 09:07:37.958583       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 09:07:37.959926       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 09:07:37.964432       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 09:07:37.974514       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:07:37.974537       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 09:07:37.974548       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 09:07:37.974576       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 09:07:52.957603       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d08299c781b5bb99b671160da6d283abbaf60a124f5358cb647fbe5f2a4706bc] <==
	I1124 09:07:39.361203       1 server_linux.go:53] "Using iptables proxy"
	I1124 09:07:39.424742       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 09:07:39.525130       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 09:07:39.525172       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 09:07:39.525309       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 09:07:39.546419       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 09:07:39.546498       1 server_linux.go:132] "Using iptables Proxier"
	I1124 09:07:39.551771       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 09:07:39.552128       1 server.go:527] "Version info" version="v1.34.2"
	I1124 09:07:39.552171       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 09:07:39.553631       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 09:07:39.553655       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 09:07:39.553685       1 config.go:200] "Starting service config controller"
	I1124 09:07:39.553691       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 09:07:39.553773       1 config.go:106] "Starting endpoint slice config controller"
	I1124 09:07:39.553797       1 config.go:309] "Starting node config controller"
	I1124 09:07:39.553808       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 09:07:39.553815       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 09:07:39.553817       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 09:07:39.654672       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 09:07:39.654696       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 09:07:39.654776       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8511ac48cd627d9ff60b0149b23f93346ef69d770e4169764582c1c9a39fd342] <==
	E1124 09:07:30.994226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:07:30.994537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:07:30.994691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 09:07:30.995025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 09:07:30.995356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 09:07:30.995404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:07:30.995502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 09:07:31.834821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 09:07:31.854357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 09:07:31.916714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 09:07:31.968305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 09:07:31.995644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 09:07:32.008685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 09:07:32.034293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 09:07:32.045688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 09:07:32.078528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 09:07:32.114248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 09:07:32.144014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 09:07:32.163314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 09:07:32.163412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 09:07:32.173613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 09:07:32.286339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 09:07:32.380746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 09:07:32.411245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1124 09:07:34.791706       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 09:07:34 default-k8s-diff-port-603918 kubelet[1469]: E1124 09:07:34.826567    1469 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-603918\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-603918"
	Nov 24 09:07:34 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:34.843353    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-603918" podStartSLOduration=1.8432976110000001 podStartE2EDuration="1.843297611s" podCreationTimestamp="2025-11-24 09:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:34.841934644 +0000 UTC m=+1.285704943" watchObservedRunningTime="2025-11-24 09:07:34.843297611 +0000 UTC m=+1.287067887"
	Nov 24 09:07:34 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:34.856503    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-603918" podStartSLOduration=1.8564835579999999 podStartE2EDuration="1.856483558s" podCreationTimestamp="2025-11-24 09:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:34.856386383 +0000 UTC m=+1.300156679" watchObservedRunningTime="2025-11-24 09:07:34.856483558 +0000 UTC m=+1.300253835"
	Nov 24 09:07:34 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:34.883844    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-603918" podStartSLOduration=1.883822646 podStartE2EDuration="1.883822646s" podCreationTimestamp="2025-11-24 09:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:34.881332354 +0000 UTC m=+1.325102631" watchObservedRunningTime="2025-11-24 09:07:34.883822646 +0000 UTC m=+1.327592930"
	Nov 24 09:07:37 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:37.948739    1469 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 09:07:37 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:37.952798    1469 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.788750    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66cc3c18-98b4-47fa-a69c-90041bacd287-kube-proxy\") pod \"kube-proxy-5hvkq\" (UID: \"66cc3c18-98b4-47fa-a69c-90041bacd287\") " pod="kube-system/kube-proxy-5hvkq"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.788797    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53f892c9-f95c-488d-886b-87b4d981b058-xtables-lock\") pod \"kindnet-b9gr6\" (UID: \"53f892c9-f95c-488d-886b-87b4d981b058\") " pod="kube-system/kindnet-b9gr6"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.788812    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53f892c9-f95c-488d-886b-87b4d981b058-lib-modules\") pod \"kindnet-b9gr6\" (UID: \"53f892c9-f95c-488d-886b-87b4d981b058\") " pod="kube-system/kindnet-b9gr6"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.788833    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkzkh\" (UniqueName: \"kubernetes.io/projected/53f892c9-f95c-488d-886b-87b4d981b058-kube-api-access-tkzkh\") pod \"kindnet-b9gr6\" (UID: \"53f892c9-f95c-488d-886b-87b4d981b058\") " pod="kube-system/kindnet-b9gr6"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.788945    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66cc3c18-98b4-47fa-a69c-90041bacd287-xtables-lock\") pod \"kube-proxy-5hvkq\" (UID: \"66cc3c18-98b4-47fa-a69c-90041bacd287\") " pod="kube-system/kube-proxy-5hvkq"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.788970    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/53f892c9-f95c-488d-886b-87b4d981b058-cni-cfg\") pod \"kindnet-b9gr6\" (UID: \"53f892c9-f95c-488d-886b-87b4d981b058\") " pod="kube-system/kindnet-b9gr6"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.789040    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k54s8\" (UniqueName: \"kubernetes.io/projected/66cc3c18-98b4-47fa-a69c-90041bacd287-kube-api-access-k54s8\") pod \"kube-proxy-5hvkq\" (UID: \"66cc3c18-98b4-47fa-a69c-90041bacd287\") " pod="kube-system/kube-proxy-5hvkq"
	Nov 24 09:07:38 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:38.789074    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66cc3c18-98b4-47fa-a69c-90041bacd287-lib-modules\") pod \"kube-proxy-5hvkq\" (UID: \"66cc3c18-98b4-47fa-a69c-90041bacd287\") " pod="kube-system/kube-proxy-5hvkq"
	Nov 24 09:07:39 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:39.787884    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5hvkq" podStartSLOduration=1.787863712 podStartE2EDuration="1.787863712s" podCreationTimestamp="2025-11-24 09:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:39.787832474 +0000 UTC m=+6.231602750" watchObservedRunningTime="2025-11-24 09:07:39.787863712 +0000 UTC m=+6.231633989"
	Nov 24 09:07:39 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:39.800158    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-b9gr6" podStartSLOduration=1.8001378670000001 podStartE2EDuration="1.800137867s" podCreationTimestamp="2025-11-24 09:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:39.79985624 +0000 UTC m=+6.243626520" watchObservedRunningTime="2025-11-24 09:07:39.800137867 +0000 UTC m=+6.243908144"
	Nov 24 09:07:50 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:50.039378    1469 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 09:07:50 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:50.171891    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptz72\" (UniqueName: \"kubernetes.io/projected/1081180d-32ee-417f-aea3-ba27c3ee7c30-kube-api-access-ptz72\") pod \"storage-provisioner\" (UID: \"1081180d-32ee-417f-aea3-ba27c3ee7c30\") " pod="kube-system/storage-provisioner"
	Nov 24 09:07:50 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:50.171943    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1081180d-32ee-417f-aea3-ba27c3ee7c30-tmp\") pod \"storage-provisioner\" (UID: \"1081180d-32ee-417f-aea3-ba27c3ee7c30\") " pod="kube-system/storage-provisioner"
	Nov 24 09:07:50 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:50.171962    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33252e00-03f6-4116-98b4-ffd795b3bce8-config-volume\") pod \"coredns-66bc5c9577-xrvmp\" (UID: \"33252e00-03f6-4116-98b4-ffd795b3bce8\") " pod="kube-system/coredns-66bc5c9577-xrvmp"
	Nov 24 09:07:50 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:50.171978    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vgkf\" (UniqueName: \"kubernetes.io/projected/33252e00-03f6-4116-98b4-ffd795b3bce8-kube-api-access-4vgkf\") pod \"coredns-66bc5c9577-xrvmp\" (UID: \"33252e00-03f6-4116-98b4-ffd795b3bce8\") " pod="kube-system/coredns-66bc5c9577-xrvmp"
	Nov 24 09:07:50 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:50.744573    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xrvmp" podStartSLOduration=11.744549804 podStartE2EDuration="11.744549804s" podCreationTimestamp="2025-11-24 09:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:50.744300654 +0000 UTC m=+17.188070934" watchObservedRunningTime="2025-11-24 09:07:50.744549804 +0000 UTC m=+17.188320081"
	Nov 24 09:07:52 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:52.650176    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.65014873 podStartE2EDuration="14.65014873s" podCreationTimestamp="2025-11-24 09:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:50.762642303 +0000 UTC m=+17.206412602" watchObservedRunningTime="2025-11-24 09:07:52.65014873 +0000 UTC m=+19.093919006"
	Nov 24 09:07:52 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:52.688323    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxnq6\" (UniqueName: \"kubernetes.io/projected/4581197a-228b-4f7d-a2bc-a5ef7b7eb2a7-kube-api-access-gxnq6\") pod \"busybox\" (UID: \"4581197a-228b-4f7d-a2bc-a5ef7b7eb2a7\") " pod="default/busybox"
	Nov 24 09:07:55 default-k8s-diff-port-603918 kubelet[1469]: I1124 09:07:55.763587    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.21703949 podStartE2EDuration="3.763566605s" podCreationTimestamp="2025-11-24 09:07:52 +0000 UTC" firstStartedPulling="2025-11-24 09:07:53.074412301 +0000 UTC m=+19.518182586" lastFinishedPulling="2025-11-24 09:07:55.620939428 +0000 UTC m=+22.064709701" observedRunningTime="2025-11-24 09:07:55.762989698 +0000 UTC m=+22.206759992" watchObservedRunningTime="2025-11-24 09:07:55.763566605 +0000 UTC m=+22.207336884"
	
	
	==> storage-provisioner [7b6759161aaf750bd83cd8f574e0289de4f56c8660bfd1a4f8c9fef29a584e58] <==
	I1124 09:07:50.545602       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 09:07:50.553411       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 09:07:50.553518       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 09:07:50.555676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:50.561356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:07:50.561559       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 09:07:50.561702       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba6dd8d8-4ce9-40d3-9df4-feec65d10000", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-603918_cd8e94d8-c639-4b1f-8d52-71d384f58406 became leader
	I1124 09:07:50.561751       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-603918_cd8e94d8-c639-4b1f-8d52-71d384f58406!
	W1124 09:07:50.563681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:50.566772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 09:07:50.662270       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-603918_cd8e94d8-c639-4b1f-8d52-71d384f58406!
	W1124 09:07:52.569721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:52.576609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:54.580296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:54.584090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:56.587718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:56.592251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:58.596203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:07:58.600645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:08:00.604133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:08:00.609198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:08:02.612965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:08:02.617349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:08:04.620984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 09:08:04.625837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-603918 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.28s)

Test pass (384/420)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 17.33
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 11.73
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.23
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 13.91
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0.93
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.41
30 TestBinaryMirror 0.88
31 TestOffline 57.4
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 132.38
38 TestAddons/serial/Volcano 40.3
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 9.48
44 TestAddons/parallel/Registry 15.53
45 TestAddons/parallel/RegistryCreds 0.7
46 TestAddons/parallel/Ingress 20.12
47 TestAddons/parallel/InspektorGadget 10.75
48 TestAddons/parallel/MetricsServer 5.74
50 TestAddons/parallel/CSI 59.67
51 TestAddons/parallel/Headlamp 16.79
52 TestAddons/parallel/CloudSpanner 5.52
53 TestAddons/parallel/LocalPath 14.17
54 TestAddons/parallel/NvidiaDevicePlugin 5.49
55 TestAddons/parallel/Yakd 10.71
56 TestAddons/parallel/AmdGpuDevicePlugin 5.51
57 TestAddons/StoppedEnableDisable 12.62
58 TestCertOptions 29.67
59 TestCertExpiration 218.9
61 TestForceSystemdFlag 35.72
62 TestForceSystemdEnv 40.69
63 TestDockerEnvContainerd 40.4
67 TestErrorSpam/setup 25.22
68 TestErrorSpam/start 0.68
69 TestErrorSpam/status 0.98
70 TestErrorSpam/pause 1.5
71 TestErrorSpam/unpause 1.54
72 TestErrorSpam/stop 2.09
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 39.57
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 8.07
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.49
84 TestFunctional/serial/CacheCmd/cache/add_local 1.99
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 44.26
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.21
95 TestFunctional/serial/LogsFileCmd 1.21
96 TestFunctional/serial/InvalidService 4.06
98 TestFunctional/parallel/ConfigCmd 0.5
99 TestFunctional/parallel/DashboardCmd 9.26
100 TestFunctional/parallel/DryRun 0.42
101 TestFunctional/parallel/InternationalLanguage 0.17
102 TestFunctional/parallel/StatusCmd 1.06
106 TestFunctional/parallel/ServiceCmdConnect 13.71
107 TestFunctional/parallel/AddonsCmd 0.17
108 TestFunctional/parallel/PersistentVolumeClaim 34.94
110 TestFunctional/parallel/SSHCmd 0.64
111 TestFunctional/parallel/CpCmd 1.81
112 TestFunctional/parallel/MySQL 23.99
113 TestFunctional/parallel/FileSync 0.33
114 TestFunctional/parallel/CertSync 1.83
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
122 TestFunctional/parallel/License 0.52
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.52
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
131 TestFunctional/parallel/ImageCommands/ImageBuild 3.94
132 TestFunctional/parallel/ImageCommands/Setup 1.93
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.25
136 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
137 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
138 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.19
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.19
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.97
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
143 TestFunctional/parallel/ProfileCmd/profile_list 0.4
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/parallel/ServiceCmd/DeployApp 10.17
156 TestFunctional/parallel/ServiceCmd/List 0.92
157 TestFunctional/parallel/MountCmd/any-port 7.74
158 TestFunctional/parallel/ServiceCmd/JSONOutput 0.92
159 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
160 TestFunctional/parallel/ServiceCmd/Format 0.55
161 TestFunctional/parallel/ServiceCmd/URL 0.58
162 TestFunctional/parallel/MountCmd/specific-port 2.17
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.87
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 46.91
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 7.23
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.59
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.3
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.54
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 35.76
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.22
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.23
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.33
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.51
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 14.49
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.17
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 8.69
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.19
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 32.59
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.6
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.93
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 18.91
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.35
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.95
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.66
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.37
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.5
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.27
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.27
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.28
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.25
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.98
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.93
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.18
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 16.16
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.21
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 19.22
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.18
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 2.19
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.36
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.51
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.69
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.42
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.56
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.56
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.59
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.55
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.6
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.43
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.45
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 7.91
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.42
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.93
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.08
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 122.06
266 TestMultiControlPlane/serial/DeployApp 5.9
267 TestMultiControlPlane/serial/PingHostFromPods 1.2
268 TestMultiControlPlane/serial/AddWorkerNode 24.43
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
271 TestMultiControlPlane/serial/CopyFile 17.85
272 TestMultiControlPlane/serial/StopSecondaryNode 12.71
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
274 TestMultiControlPlane/serial/RestartSecondaryNode 9.02
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.96
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 97.29
277 TestMultiControlPlane/serial/DeleteSecondaryNode 9.44
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
279 TestMultiControlPlane/serial/StopCluster 36.11
280 TestMultiControlPlane/serial/RestartCluster 57.9
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
282 TestMultiControlPlane/serial/AddSecondaryNode 41.84
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
288 TestJSONOutput/start/Command 39.42
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Command 0.66
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Command 0.6
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 5.85
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.23
313 TestKicCustomNetwork/create_custom_network 33.68
314 TestKicCustomNetwork/use_default_bridge_network 24.52
315 TestKicExistingNetwork 26.48
316 TestKicCustomSubnet 26.54
317 TestKicStaticIP 27.23
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 53.43
322 TestMountStart/serial/StartWithMountFirst 7.36
323 TestMountStart/serial/VerifyMountFirst 0.28
324 TestMountStart/serial/StartWithMountSecond 4.72
325 TestMountStart/serial/VerifyMountSecond 0.29
326 TestMountStart/serial/DeleteFirst 1.72
327 TestMountStart/serial/VerifyMountPostDelete 0.29
328 TestMountStart/serial/Stop 1.27
329 TestMountStart/serial/RestartStopped 7.97
330 TestMountStart/serial/VerifyMountPostStop 0.29
333 TestMultiNode/serial/FreshStart2Nodes 65.23
334 TestMultiNode/serial/DeployApp2Nodes 5.42
335 TestMultiNode/serial/PingHostFrom2Pods 0.82
336 TestMultiNode/serial/AddNode 25.94
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.68
339 TestMultiNode/serial/CopyFile 10.03
340 TestMultiNode/serial/StopNode 2.29
341 TestMultiNode/serial/StartAfterStop 7.23
342 TestMultiNode/serial/RestartKeepsNodes 78
343 TestMultiNode/serial/DeleteNode 5.27
344 TestMultiNode/serial/StopMultiNode 24.02
345 TestMultiNode/serial/RestartMultiNode 49.7
346 TestMultiNode/serial/ValidateNameConflict 24.23
351 TestPreload 119.83
353 TestScheduledStopUnix 97.76
356 TestInsufficientStorage 11.66
357 TestRunningBinaryUpgrade 55.86
359 TestKubernetesUpgrade 306.45
360 TestMissingContainerUpgrade 94.87
361 TestStoppedBinaryUpgrade/Setup 3.24
363 TestPause/serial/Start 53.48
364 TestStoppedBinaryUpgrade/Upgrade 110.51
365 TestPause/serial/SecondStartNoReconfiguration 10.2
366 TestPause/serial/Pause 1.86
367 TestPause/serial/VerifyStatus 0.44
368 TestPause/serial/Unpause 1.06
369 TestPause/serial/PauseAgain 0.87
370 TestPause/serial/DeletePaused 2.91
371 TestPause/serial/VerifyDeletedResources 0.91
372 TestStoppedBinaryUpgrade/MinikubeLogs 1.19
381 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
382 TestNoKubernetes/serial/StartWithK8s 25.24
383 TestNoKubernetes/serial/StartWithStopK8s 22.38
384 TestNoKubernetes/serial/Start 6.9
385 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
386 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
387 TestNoKubernetes/serial/ProfileList 1.96
388 TestNoKubernetes/serial/Stop 2.15
389 TestNoKubernetes/serial/StartNoArgs 6.53
390 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
398 TestNetworkPlugins/group/false 3.67
403 TestStartStop/group/old-k8s-version/serial/FirstStart 47.73
405 TestStartStop/group/no-preload/serial/FirstStart 48.51
408 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.95
409 TestStartStop/group/old-k8s-version/serial/Stop 12.09
410 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.87
411 TestStartStop/group/no-preload/serial/Stop 12.04
412 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
413 TestStartStop/group/old-k8s-version/serial/SecondStart 51.97
414 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
415 TestStartStop/group/no-preload/serial/SecondStart 48.67
417 TestStartStop/group/embed-certs/serial/FirstStart 46.61
418 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
419 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
421 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
422 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
423 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 1.26
424 TestStartStop/group/no-preload/serial/Pause 3.35
425 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
426 TestStartStop/group/old-k8s-version/serial/Pause 3.45
427 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.9
428 TestStartStop/group/embed-certs/serial/Stop 12.43
430 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.63
432 TestStartStop/group/newest-cni/serial/FirstStart 34.72
433 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
434 TestStartStop/group/embed-certs/serial/SecondStart 48.24
435 TestStartStop/group/newest-cni/serial/DeployApp 0
436 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.78
437 TestStartStop/group/newest-cni/serial/Stop 1.43
438 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
439 TestStartStop/group/newest-cni/serial/SecondStart 11.84
441 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
442 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
443 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 1.41
444 TestStartStop/group/newest-cni/serial/Pause 2.74
445 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
446 TestNetworkPlugins/group/auto/Start 42.39
447 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.27
448 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
449 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
450 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
451 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 57.99
452 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 1.19
453 TestStartStop/group/embed-certs/serial/Pause 3.21
454 TestNetworkPlugins/group/kindnet/Start 48.14
455 TestNetworkPlugins/group/auto/KubeletFlags 0.31
456 TestNetworkPlugins/group/auto/NetCatPod 9.23
457 TestNetworkPlugins/group/calico/Start 53.08
458 TestNetworkPlugins/group/auto/DNS 0.13
459 TestNetworkPlugins/group/auto/Localhost 0.11
460 TestNetworkPlugins/group/auto/HairPin 0.12
461 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
462 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
463 TestNetworkPlugins/group/custom-flannel/Start 63.44
464 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
465 TestNetworkPlugins/group/kindnet/NetCatPod 10.35
466 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
467 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.27
468 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.45
469 TestNetworkPlugins/group/kindnet/DNS 0.17
470 TestNetworkPlugins/group/kindnet/Localhost 0.14
471 TestNetworkPlugins/group/kindnet/HairPin 0.29
472 TestNetworkPlugins/group/enable-default-cni/Start 66.38
473 TestNetworkPlugins/group/calico/ControllerPod 6.01
474 TestNetworkPlugins/group/calico/KubeletFlags 0.36
475 TestNetworkPlugins/group/calico/NetCatPod 9.3
476 TestNetworkPlugins/group/flannel/Start 54.17
477 TestNetworkPlugins/group/calico/DNS 0.16
478 TestNetworkPlugins/group/calico/Localhost 0.13
479 TestNetworkPlugins/group/calico/HairPin 0.14
480 TestNetworkPlugins/group/bridge/Start 38.78
481 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
482 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.24
483 TestNetworkPlugins/group/custom-flannel/DNS 0.13
484 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
485 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
486 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
487 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.22
488 TestNetworkPlugins/group/flannel/ControllerPod 6.01
489 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
490 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
491 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
492 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
493 TestNetworkPlugins/group/flannel/NetCatPod 9.18
494 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
495 TestNetworkPlugins/group/bridge/NetCatPod 8.19
496 TestNetworkPlugins/group/flannel/DNS 0.14
497 TestNetworkPlugins/group/flannel/Localhost 0.12
498 TestNetworkPlugins/group/flannel/HairPin 0.11
499 TestNetworkPlugins/group/bridge/DNS 0.15
500 TestNetworkPlugins/group/bridge/Localhost 0.11
501 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (17.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-318255 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-318255 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (17.328365309s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (17.33s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 08:29:24.199199  439524 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1124 08:29:24.199292  439524 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-318255
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-318255: exit status 85 (74.491239ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-318255 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-318255 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:29:06
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:29:06.929521  439536 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:29:06.930158  439536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:29:06.930176  439536 out.go:374] Setting ErrFile to fd 2...
	I1124 08:29:06.930184  439536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:29:06.930685  439536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	W1124 08:29:06.931206  439536 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21978-435860/.minikube/config/config.json: open /home/jenkins/minikube-integration/21978-435860/.minikube/config/config.json: no such file or directory
	I1124 08:29:06.931809  439536 out.go:368] Setting JSON to true
	I1124 08:29:06.932901  439536 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11483,"bootTime":1763961464,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:29:06.933005  439536 start.go:143] virtualization: kvm guest
	I1124 08:29:06.936785  439536 out.go:99] [download-only-318255] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1124 08:29:06.936910  439536 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 08:29:06.936964  439536 notify.go:221] Checking for updates...
	I1124 08:29:06.938222  439536 out.go:171] MINIKUBE_LOCATION=21978
	I1124 08:29:06.939400  439536 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:29:06.940579  439536 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 08:29:06.941682  439536 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 08:29:06.942676  439536 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 08:29:06.944718  439536 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 08:29:06.945072  439536 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:29:06.967271  439536 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:29:06.967366  439536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:29:07.303873  439536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-24 08:29:07.292936945 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:29:07.304002  439536 docker.go:319] overlay module found
	I1124 08:29:07.305612  439536 out.go:99] Using the docker driver based on user configuration
	I1124 08:29:07.305641  439536 start.go:309] selected driver: docker
	I1124 08:29:07.305651  439536 start.go:927] validating driver "docker" against <nil>
	I1124 08:29:07.305774  439536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:29:07.367833  439536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-24 08:29:07.358351032 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:29:07.368030  439536 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 08:29:07.368694  439536 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 08:29:07.368877  439536 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 08:29:07.370407  439536 out.go:171] Using Docker driver with root privileges
	I1124 08:29:07.371447  439536 cni.go:84] Creating CNI manager for ""
	I1124 08:29:07.371537  439536 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 08:29:07.371552  439536 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 08:29:07.371631  439536 start.go:353] cluster config:
	{Name:download-only-318255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-318255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:29:07.372768  439536 out.go:99] Starting "download-only-318255" primary control-plane node in "download-only-318255" cluster
	I1124 08:29:07.372785  439536 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 08:29:07.373722  439536 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1124 08:29:07.373768  439536 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 08:29:07.373859  439536 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 08:29:07.390997  439536 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 08:29:07.391176  439536 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 08:29:07.391262  439536 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 08:29:07.508673  439536 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1124 08:29:07.508726  439536 cache.go:65] Caching tarball of preloaded images
	I1124 08:29:07.509398  439536 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 08:29:07.511055  439536 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1124 08:29:07.511071  439536 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1124 08:29:07.623247  439536 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1124 08:29:07.623401  439536 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1124 08:29:20.238479  439536 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1124 08:29:20.238869  439536 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/download-only-318255/config.json ...
	I1124 08:29:20.238903  439536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/download-only-318255/config.json: {Name:mk413c8a116c4bbf6051e083d1ca80944619c6ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:20.239115  439536 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 08:29:20.239863  439536 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-318255 host does not exist
	  To start a cluster, run: "minikube start -p download-only-318255"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-318255
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (11.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-805439 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-805439 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.724771445s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (11.73s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1124 08:29:36.360032  439524 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1124 08:29:36.360079  439524 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-805439
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-805439: exit status 85 (74.91523ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-318255 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-318255 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-318255                                                                                                                                                               │ download-only-318255 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-805439 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-805439 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:29:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:29:24.688584  439936 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:29:24.688806  439936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:29:24.688815  439936 out.go:374] Setting ErrFile to fd 2...
	I1124 08:29:24.688819  439936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:29:24.688986  439936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 08:29:24.689435  439936 out.go:368] Setting JSON to true
	I1124 08:29:24.690310  439936 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11501,"bootTime":1763961464,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:29:24.690364  439936 start.go:143] virtualization: kvm guest
	I1124 08:29:24.692058  439936 out.go:99] [download-only-805439] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:29:24.692227  439936 notify.go:221] Checking for updates...
	I1124 08:29:24.693283  439936 out.go:171] MINIKUBE_LOCATION=21978
	I1124 08:29:24.694569  439936 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:29:24.695731  439936 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 08:29:24.696719  439936 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 08:29:24.700947  439936 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 08:29:24.703009  439936 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 08:29:24.703250  439936 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:29:24.726013  439936 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:29:24.726097  439936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:29:24.788022  439936 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-24 08:29:24.7781304 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:29:24.788134  439936 docker.go:319] overlay module found
	I1124 08:29:24.789437  439936 out.go:99] Using the docker driver based on user configuration
	I1124 08:29:24.789485  439936 start.go:309] selected driver: docker
	I1124 08:29:24.789498  439936 start.go:927] validating driver "docker" against <nil>
	I1124 08:29:24.789593  439936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:29:24.848807  439936 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-24 08:29:24.839678835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:29:24.848986  439936 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 08:29:24.849588  439936 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 08:29:24.849782  439936 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 08:29:24.851341  439936 out.go:171] Using Docker driver with root privileges
	I1124 08:29:24.852349  439936 cni.go:84] Creating CNI manager for ""
	I1124 08:29:24.852429  439936 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 08:29:24.852441  439936 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 08:29:24.852524  439936 start.go:353] cluster config:
	{Name:download-only-805439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-805439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:29:24.853746  439936 out.go:99] Starting "download-only-805439" primary control-plane node in "download-only-805439" cluster
	I1124 08:29:24.853760  439936 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 08:29:24.854943  439936 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1124 08:29:24.854980  439936 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1124 08:29:24.855092  439936 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 08:29:24.871046  439936 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 08:29:24.871166  439936 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 08:29:24.871182  439936 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 08:29:24.871186  439936 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 08:29:24.871196  439936 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 08:29:25.266534  439936 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1124 08:29:25.266568  439936 cache.go:65] Caching tarball of preloaded images
	I1124 08:29:25.266747  439936 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1124 08:29:25.268516  439936 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1124 08:29:25.268537  439936 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1124 08:29:25.376118  439936 preload.go:295] Got checksum from GCS API "9dc714afc7e85c27d8bb9ef4a563e9e2"
	I1124 08:29:25.376170  439936 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:9dc714afc7e85c27d8bb9ef4a563e9e2 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-805439 host does not exist
	  To start a cluster, run: "minikube start -p download-only-805439"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-805439
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0-beta.0/json-events (13.91s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-932718 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-932718 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.905020345s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (13.91s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0.93s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
I1124 08:29:50.896591  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 08:29:51.213951  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 08:29:51.522083  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.93s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-932718
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-932718: exit status 85 (75.613405ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-318255 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-318255 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-318255                                                                                                                                                                      │ download-only-318255 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-805439 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-805439 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-805439                                                                                                                                                                      │ download-only-805439 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │ 24 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-932718 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-932718 │ jenkins │ v1.37.0 │ 24 Nov 25 08:29 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 08:29:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 08:29:36.872185  440329 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:29:36.872420  440329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:29:36.872428  440329 out.go:374] Setting ErrFile to fd 2...
	I1124 08:29:36.872432  440329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:29:36.872641  440329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 08:29:36.873098  440329 out.go:368] Setting JSON to true
	I1124 08:29:36.873938  440329 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11513,"bootTime":1763961464,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:29:36.873994  440329 start.go:143] virtualization: kvm guest
	I1124 08:29:36.875710  440329 out.go:99] [download-only-932718] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:29:36.875904  440329 notify.go:221] Checking for updates...
	I1124 08:29:36.877113  440329 out.go:171] MINIKUBE_LOCATION=21978
	I1124 08:29:36.878535  440329 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:29:36.879884  440329 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 08:29:36.881113  440329 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 08:29:36.882239  440329 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 08:29:36.884490  440329 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 08:29:36.884731  440329 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:29:36.908097  440329 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:29:36.908186  440329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:29:36.967098  440329 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-24 08:29:36.956021977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:29:36.967218  440329 docker.go:319] overlay module found
	I1124 08:29:36.968812  440329 out.go:99] Using the docker driver based on user configuration
	I1124 08:29:36.968856  440329 start.go:309] selected driver: docker
	I1124 08:29:36.968865  440329 start.go:927] validating driver "docker" against <nil>
	I1124 08:29:36.968947  440329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:29:37.028137  440329 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-24 08:29:37.018911932 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:29:37.028346  440329 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 08:29:37.029043  440329 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 08:29:37.029236  440329 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 08:29:37.030972  440329 out.go:171] Using Docker driver with root privileges
	I1124 08:29:37.032106  440329 cni.go:84] Creating CNI manager for ""
	I1124 08:29:37.032176  440329 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 08:29:37.032189  440329 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 08:29:37.032273  440329 start.go:353] cluster config:
	{Name:download-only-932718 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-932718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:29:37.033566  440329 out.go:99] Starting "download-only-932718" primary control-plane node in "download-only-932718" cluster
	I1124 08:29:37.033586  440329 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 08:29:37.034721  440329 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1124 08:29:37.034756  440329 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 08:29:37.034855  440329 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 08:29:37.052260  440329 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 08:29:37.052388  440329 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 08:29:37.052407  440329 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 08:29:37.052417  440329 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 08:29:37.052431  440329 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	W1124 08:29:37.446017  440329 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	W1124 08:29:37.711131  440329 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	I1124 08:29:37.711674  440329 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/download-only-932718/config.json ...
	I1124 08:29:37.711714  440329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/download-only-932718/config.json: {Name:mk17bdb559cb48d4c3c3fdf5301aa29ba96ab41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 08:29:37.712284  440329 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1124 08:29:37.712285  440329 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1124 08:29:37.712947  440329 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl
	I1124 08:29:37.712986  440329 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1124 08:29:37.814813  440329 out.go:99] Another minikube instance is downloading dependencies... 
	
	
	* The control-plane node download-only-932718 host does not exist
	  To start a cluster, run: "minikube start -p download-only-932718"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-932718
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.41s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-045630 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-045630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-045630
--- PASS: TestDownloadOnlyKic (0.41s)

TestBinaryMirror (0.88s)

=== RUN   TestBinaryMirror
I1124 08:29:53.116800  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-215883 --alsologtostderr --binary-mirror http://127.0.0.1:35981 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-215883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-215883
--- PASS: TestBinaryMirror (0.88s)

TestOffline (57.4s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-126100 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-126100 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (52.919035478s)
helpers_test.go:175: Cleaning up "offline-containerd-126100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-126100
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-126100: (4.485306269s)
--- PASS: TestOffline (57.40s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-598179
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-598179: exit status 85 (66.585879ms)

-- stdout --
	* Profile "addons-598179" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-598179"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-598179
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-598179: exit status 85 (65.714029ms)

-- stdout --
	* Profile "addons-598179" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-598179"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (132.38s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-598179 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-598179 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m12.380779405s)
--- PASS: TestAddons/Setup (132.38s)

TestAddons/serial/Volcano (40.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 19.795011ms
addons_test.go:876: volcano-admission stabilized in 19.836934ms
addons_test.go:868: volcano-scheduler stabilized in 19.887566ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-89ckt" [998288ad-9408-4e3b-bd32-6d1ec706d71a] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003225302s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-lh92j" [25823221-5ac3-4788-a9eb-dbd18bd32072] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003309116s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-q4v7t" [5f6f00e6-680d-4fa5-b771-94ca8c963a8d] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003984143s
addons_test.go:903: (dbg) Run:  kubectl --context addons-598179 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-598179 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-598179 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [e5c328cc-d744-467a-9546-bfa648e74145] Pending
helpers_test.go:352: "test-job-nginx-0" [e5c328cc-d744-467a-9546-bfa648e74145] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [e5c328cc-d744-467a-9546-bfa648e74145] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003947384s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-598179 addons disable volcano --alsologtostderr -v=1: (11.952255803s)
--- PASS: TestAddons/serial/Volcano (40.30s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-598179 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-598179 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.48s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-598179 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-598179 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [00349d1b-1562-48e8-b6ab-f97a78510166] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [00349d1b-1562-48e8-b6ab-f97a78510166] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004174681s
addons_test.go:694: (dbg) Run:  kubectl --context addons-598179 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-598179 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-598179 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.48s)

TestAddons/parallel/Registry (15.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.708698ms
I1124 08:33:05.297481  439524 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 08:33:05.297506  439524 kapi.go:107] duration metric: took 3.269556ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-dfspf" [ed4c44ee-5e6a-4c55-94d3-bd4a4db343ae] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003239359s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-mlc6d" [947e0caf-6263-4ded-8017-6c2151fae8db] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003921368s
addons_test.go:392: (dbg) Run:  kubectl --context addons-598179 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-598179 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-598179 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.712814508s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 ip
2025/11/24 08:33:20 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.53s)

TestAddons/parallel/RegistryCreds (0.7s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.543968ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-598179
addons_test.go:332: (dbg) Run:  kubectl --context addons-598179 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

TestAddons/parallel/Ingress (20.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-598179 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-598179 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-598179 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [dabb83d8-181d-444e-90a2-4adcb0453fac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [dabb83d8-181d-444e-90a2-4adcb0453fac] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003564973s
I1124 08:33:31.262008  439524 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-598179 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-598179 addons disable ingress-dns --alsologtostderr -v=1: (1.06180858s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-598179 addons disable ingress --alsologtostderr -v=1: (7.864603718s)
--- PASS: TestAddons/parallel/Ingress (20.12s)

TestAddons/parallel/InspektorGadget (10.75s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-7wm44" [d9d7e055-8e83-4f9f-bfd1-e948c28e77ae] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004308886s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-598179 addons disable inspektor-gadget --alsologtostderr -v=1: (5.742429489s)
--- PASS: TestAddons/parallel/InspektorGadget (10.75s)

TestAddons/parallel/MetricsServer (5.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.487215ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4cb48" [458be6cb-5ec0-426c-ad40-d93816a7c073] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003183296s
addons_test.go:463: (dbg) Run:  kubectl --context addons-598179 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.74s)

TestAddons/parallel/CSI (59.67s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1124 08:33:05.294287  439524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.279988ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-598179 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-598179 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [62ab77cd-c449-4530-bbbc-1233597d4a0d] Pending
helpers_test.go:352: "task-pv-pod" [62ab77cd-c449-4530-bbbc-1233597d4a0d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [62ab77cd-c449-4530-bbbc-1233597d4a0d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004218898s
addons_test.go:572: (dbg) Run:  kubectl --context addons-598179 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-598179 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-598179 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-598179 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-598179 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-598179 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-598179 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [e3c7e7b1-0e95-4d1d-9155-267f2068f12a] Pending
helpers_test.go:352: "task-pv-pod-restore" [e3c7e7b1-0e95-4d1d-9155-267f2068f12a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [e3c7e7b1-0e95-4d1d-9155-267f2068f12a] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00339176s
addons_test.go:614: (dbg) Run:  kubectl --context addons-598179 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-598179 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-598179 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-598179 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.535748054s)
--- PASS: TestAddons/parallel/CSI (59.67s)

TestAddons/parallel/Headlamp (16.79s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-598179 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-9c8zv" [b0987f06-82a9-460e-86e0-3a50efa05284] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-9c8zv" [b0987f06-82a9-460e-86e0-3a50efa05284] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.0038868s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-598179 addons disable headlamp --alsologtostderr -v=1: (5.976425997s)
--- PASS: TestAddons/parallel/Headlamp (16.79s)

TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-97rtf" [9a186e17-f45c-48b1-96ff-279ae4a58664] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003877821s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/parallel/LocalPath (14.17s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-598179 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-598179 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-598179 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [2062ac70-c29e-4d93-a6d8-ca73ababb053] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [2062ac70-c29e-4d93-a6d8-ca73ababb053] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [2062ac70-c29e-4d93-a6d8-ca73ababb053] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.002923076s
addons_test.go:967: (dbg) Run:  kubectl --context addons-598179 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 ssh "cat /opt/local-path-provisioner/pvc-1d18686e-7af6-4a39-8886-39939e450721_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-598179 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-598179 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (14.17s)
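Note: the LocalPath flow above can be reproduced by hand against the same profile; a minimal sketch, assuming the addons-598179 cluster from this run and a PVC/pod pair equivalent to testdata/storage-provisioner-rancher (the manifest contents are not shown in the log, and the pvc-UUID directory under /opt/local-path-provisioner differs per run, hence the ls):
    kubectl --context addons-598179 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-598179 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-598179 get pvc test-pvc -o jsonpath='{.status.phase}'
    out/minikube-linux-amd64 -p addons-598179 ssh "ls /opt/local-path-provisioner/"
    kubectl --context addons-598179 delete pod test-local-path && kubectl --context addons-598179 delete pvc test-pvc
    out/minikube-linux-amd64 -p addons-598179 addons disable storage-provisioner-rancher --alsologtostderr -v=1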

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5jf8t" [60d130d0-d027-4664-b248-9cbf4ca28736] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003799677s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-l4pps" [8113373d-478b-4cbe-b781-87b9317a220d] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004085391s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-598179 addons disable yakd --alsologtostderr -v=1: (5.707867929s)
--- PASS: TestAddons/parallel/Yakd (10.71s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.51s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-rnd72" [0a9d2cb0-38c4-4847-9b3a-6df07a9e6fd9] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003403453s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-598179 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.51s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.62s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-598179
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-598179: (12.323026839s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-598179
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-598179
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-598179
--- PASS: TestAddons/StoppedEnableDisable (12.62s)
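Note: the stopped-cluster addon toggling above needs no running node; a minimal sketch, assuming the same addons-598179 profile (gvisor is disabled here, as in the test, only to exercise the not-enabled path):
    out/minikube-linux-amd64 stop -p addons-598179
    out/minikube-linux-amd64 addons enable dashboard -p addons-598179
    out/minikube-linux-amd64 addons disable dashboard -p addons-598179
    out/minikube-linux-amd64 addons disable gvisor -p addons-598179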

                                                
                                    
x
+
TestCertOptions (29.67s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-780864 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-780864 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (26.486302839s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-780864 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-780864 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-780864 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-780864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-780864
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-780864: (2.491657668s)
--- PASS: TestCertOptions (29.67s)
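Note: the certificate-options run above can be checked by hand; a minimal sketch, assuming a throwaway profile name (cert-options-demo is illustrative) and that the extra SANs and API server port should appear in the generated apiserver certificate and kubeconfig:
    out/minikube-linux-amd64 start -p cert-options-demo --memory=3072 --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p cert-options-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    kubectl --context cert-options-demo config view --minify | grep server
    out/minikube-linux-amd64 delete -p cert-options-demo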

                                                
                                    
x
+
TestCertExpiration (218.9s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-869306 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-869306 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (29.077660309s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-869306 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-869306 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.334211118s)
helpers_test.go:175: Cleaning up "cert-expiration-869306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-869306
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-869306: (2.48552915s)
--- PASS: TestCertExpiration (218.90s)
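Note: the expiration run above starts with short-lived certs, waits past their lifetime, then restarts with a longer --cert-expiration so the certs are regenerated; a minimal sketch, assuming a throwaway profile (cert-expiration-demo is illustrative) and that openssl is available in the node image, as the CertOptions run above shows:
    out/minikube-linux-amd64 start -p cert-expiration-demo --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p cert-expiration-demo ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
    # after the 3m window has passed:
    out/minikube-linux-amd64 start -p cert-expiration-demo --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 delete -p cert-expiration-demo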

                                                
                                    
x
+
TestForceSystemdFlag (35.72s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-291004 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-291004 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.745250352s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-291004 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-291004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-291004
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-291004: (3.627529314s)
--- PASS: TestForceSystemdFlag (35.72s)
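Note: the --force-systemd run above should leave the systemd cgroup driver enabled in containerd inside the node; a minimal sketch, assuming a throwaway profile (force-systemd-demo is illustrative) and the config path shown in the log:
    out/minikube-linux-amd64 start -p force-systemd-demo --memory=3072 --force-systemd --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p force-systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
    out/minikube-linux-amd64 delete -p force-systemd-demo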

                                                
                                    
x
+
TestForceSystemdEnv (40.69s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-156110 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-156110 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.470726305s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-156110 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-156110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-156110
E1124 09:02:06.443118  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-156110: (5.870894505s)
--- PASS: TestForceSystemdEnv (40.69s)

                                                
                                    
x
+
TestDockerEnvContainerd (40.4s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-730297 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-730297 --driver=docker  --container-runtime=containerd: (24.018041751s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-730297"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-730297": (1.027433573s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXIvUuGC/agent.463727" SSH_AGENT_PID="463728" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXIvUuGC/agent.463727" SSH_AGENT_PID="463728" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXIvUuGC/agent.463727" SSH_AGENT_PID="463728" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (2.092938012s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXIvUuGC/agent.463727" SSH_AGENT_PID="463728" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-730297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-730297
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-730297: (2.338440492s)
--- PASS: TestDockerEnvContainerd (40.40s)
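Note: rather than exporting SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST by hand as the harness does above, the docker-env output can be eval'd directly; a minimal sketch, assuming a throwaway profile (dockerenv-demo is illustrative) and a local testdata/docker-env build context like the one used in the test:
    out/minikube-linux-amd64 start -p dockerenv-demo --driver=docker --container-runtime=containerd
    eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-demo)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls
    out/minikube-linux-amd64 delete -p dockerenv-demo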

                                                
                                    
x
+
TestErrorSpam/setup (25.22s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-337768 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-337768 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-337768 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-337768 --driver=docker  --container-runtime=containerd: (25.223605614s)
--- PASS: TestErrorSpam/setup (25.22s)

                                                
                                    
x
+
TestErrorSpam/start (0.68s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
x
+
TestErrorSpam/status (0.98s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 status
--- PASS: TestErrorSpam/status (0.98s)

                                                
                                    
x
+
TestErrorSpam/pause (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 pause
--- PASS: TestErrorSpam/pause (1.50s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

                                                
                                    
x
+
TestErrorSpam/stop (2.09s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 stop: (1.88466604s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-337768 --log_dir /tmp/nospam-337768 stop
--- PASS: TestErrorSpam/stop (2.09s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/test/nested/copy/439524/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (39.57s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-850845 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-850845 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (39.573732015s)
--- PASS: TestFunctional/serial/StartWithProxy (39.57s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (8.07s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1124 08:36:20.405525  439524 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-850845 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-850845 --alsologtostderr -v=8: (8.071432995s)
functional_test.go:678: soft start took 8.073186527s for "functional-850845" cluster.
I1124 08:36:28.477543  439524 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (8.07s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-850845 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.49s)
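Note: the cache subcommands above pull images on the host and side-load them into the node's containerd; a minimal sketch, assuming the functional-850845 profile from this run:
    out/minikube-linux-amd64 -p functional-850845 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-amd64 -p functional-850845 cache add registry.k8s.io/pause:latest
    out/minikube-linux-amd64 cache list
    out/minikube-linux-amd64 -p functional-850845 ssh sudo crictl images | grep pause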

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.99s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-850845 /tmp/TestFunctionalserialCacheCmdcacheadd_local2512346907/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 cache add minikube-local-cache-test:functional-850845
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-850845 cache add minikube-local-cache-test:functional-850845: (1.671409716s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 cache delete minikube-local-cache-test:functional-850845
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-850845
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.99s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-850845 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.123073ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)
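Note: the reload sequence above deletes an image inside the node and restores it from the host-side cache; a minimal sketch, assuming the same functional-850845 profile and a previously cached registry.k8s.io/pause:latest:
    out/minikube-linux-amd64 -p functional-850845 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-850845 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail at this point
    out/minikube-linux-amd64 -p functional-850845 cache reload
    out/minikube-linux-amd64 -p functional-850845 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again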

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 kubectl -- --context functional-850845 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-850845 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (44.26s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-850845 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1124 08:37:06.444917  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:37:06.451298  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:37:06.462719  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:37:06.484107  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:37:06.525522  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:37:06.607002  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:37:06.768572  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:37:07.090272  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:37:07.732350  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:37:09.014332  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:37:11.575975  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:37:16.697657  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-850845 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.261695747s)
functional_test.go:776: restart took 44.261899094s for "functional-850845" cluster.
I1124 08:37:19.638009  439524 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (44.26s)
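Note: the restart above re-runs start on an existing profile with an --extra-config override, which is passed through to the named component's flags; a minimal sketch, assuming the same functional-850845 profile and the admission-plugin setting used in the log:
    out/minikube-linux-amd64 start -p functional-850845 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-850845 get po -l tier=control-plane -n kube-system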

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-850845 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-850845 logs: (1.211339552s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 logs --file /tmp/TestFunctionalserialLogsFileCmd2956537891/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-850845 logs --file /tmp/TestFunctionalserialLogsFileCmd2956537891/001/logs.txt: (1.211844069s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.06s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-850845 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-850845
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-850845: exit status 115 (372.851028ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31447 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-850845 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.06s)
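Note: the SVC_UNREACHABLE exit above comes from a NodePort service with no running backing pod (testdata/invalidsvc.yaml; its contents are not shown in the log, so the service below is only an illustrative stand-in); a minimal sketch of reproducing the same failure against the functional-850845 profile:
    kubectl --context functional-850845 create service nodeport invalid-svc --tcp=80:80
    out/minikube-linux-amd64 service invalid-svc -p functional-850845   # exits non-zero: no running pod for service invalid-svc found
    kubectl --context functional-850845 delete service invalid-svc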

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-850845 config get cpus: exit status 14 (114.603096ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-850845 config get cpus: exit status 14 (77.812555ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-850845 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-850845 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 486561: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.26s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-850845 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-850845 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (184.476877ms)

                                                
                                                
-- stdout --
	* [functional-850845] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:37:51.970405  485524 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:37:51.970727  485524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:37:51.970741  485524 out.go:374] Setting ErrFile to fd 2...
	I1124 08:37:51.970747  485524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:37:51.971030  485524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 08:37:51.971494  485524 out.go:368] Setting JSON to false
	I1124 08:37:51.972820  485524 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12008,"bootTime":1763961464,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:37:51.972882  485524 start.go:143] virtualization: kvm guest
	I1124 08:37:51.974675  485524 out.go:179] * [functional-850845] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:37:51.975880  485524 notify.go:221] Checking for updates...
	I1124 08:37:51.975897  485524 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:37:51.977954  485524 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:37:51.979148  485524 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 08:37:51.980278  485524 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 08:37:51.981305  485524 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:37:51.982338  485524 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:37:51.986431  485524 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 08:37:51.987275  485524 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:37:52.017627  485524 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:37:52.017746  485524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:37:52.078560  485524 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 08:37:52.068918539 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:37:52.078668  485524 docker.go:319] overlay module found
	I1124 08:37:52.080144  485524 out.go:179] * Using the docker driver based on existing profile
	I1124 08:37:52.081336  485524 start.go:309] selected driver: docker
	I1124 08:37:52.081355  485524 start.go:927] validating driver "docker" against &{Name:functional-850845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-850845 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:37:52.081512  485524 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:37:52.083121  485524 out.go:203] 
	W1124 08:37:52.084051  485524 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 08:37:52.084880  485524 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-850845 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)
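Note: the dry run above validates flags against the existing profile without creating or mutating anything, and fails memory validation because the requested 250MB is below the 1800MB usable minimum reported in the stderr; a minimal sketch, assuming the same functional-850845 profile:
    out/minikube-linux-amd64 start -p functional-850845 --dry-run --memory 250MB --driver=docker --container-runtime=containerd   # exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-linux-amd64 start -p functional-850845 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd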

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-850845 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-850845 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (166.400005ms)

                                                
                                                
-- stdout --
	* [functional-850845] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:37:48.927302  484033 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:37:48.927399  484033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:37:48.927406  484033 out.go:374] Setting ErrFile to fd 2...
	I1124 08:37:48.927410  484033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:37:48.927731  484033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 08:37:48.928172  484033 out.go:368] Setting JSON to false
	I1124 08:37:48.929309  484033 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12005,"bootTime":1763961464,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:37:48.929373  484033 start.go:143] virtualization: kvm guest
	I1124 08:37:48.931228  484033 out.go:179] * [functional-850845] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1124 08:37:48.932301  484033 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:37:48.932308  484033 notify.go:221] Checking for updates...
	I1124 08:37:48.934383  484033 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:37:48.935499  484033 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 08:37:48.936754  484033 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 08:37:48.937759  484033 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:37:48.938726  484033 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:37:48.940099  484033 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 08:37:48.940664  484033 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:37:48.964454  484033 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:37:48.964578  484033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:37:49.020637  484033 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 08:37:49.010681897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:37:49.020753  484033 docker.go:319] overlay module found
	I1124 08:37:49.022325  484033 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 08:37:49.023324  484033 start.go:309] selected driver: docker
	I1124 08:37:49.023336  484033 start.go:927] validating driver "docker" against &{Name:functional-850845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-850845 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:37:49.023440  484033 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:37:49.024953  484033 out.go:203] 
	W1124 08:37:49.025928  484033 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 08:37:49.026886  484033 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
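
The -f flag exercised above takes a Go text/template that is rendered against the per-node status value, so each {{.Field}} in the format string maps to one component's state (the "kublet" label is copied verbatim from the test's format string). Below is a minimal sketch of that rendering; the Status struct is an assumed stand-in for minikube's internal type, with only the field names taken from the command in the log.

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for the value minikube renders the
// status format template against; only the field names come from the log.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// The same format string the test passes via "status -f ...".
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))

	// Example values for a healthy single-node cluster.
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}

Running it prints host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured, the same shape of line the status command emits for this format.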

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (13.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-850845 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-850845 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-g99m9" [68a7f974-55eb-4f8c-8674-c096fbf36995] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-g99m9" [68a7f974-55eb-4f8c-8674-c096fbf36995] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.003796232s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31549
functional_test.go:1680: http://192.168.49.2:31549: success! body:
Request served by hello-node-connect-7d85dfc575-g99m9

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31549
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.71s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (34.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [fea4a03d-371e-4552-a9e2-e4c0761853ba] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.014734592s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-850845 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-850845 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-850845 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-850845 apply -f testdata/storage-provisioner/pod.yaml
I1124 08:37:38.736439  439524 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ac261782-8f2b-43ee-a087-4a3b8c4c68cc] Pending
helpers_test.go:352: "sp-pod" [ac261782-8f2b-43ee-a087-4a3b8c4c68cc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ac261782-8f2b-43ee-a087-4a3b8c4c68cc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004101702s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-850845 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-850845 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-850845 delete -f testdata/storage-provisioner/pod.yaml: (1.127301814s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-850845 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0ef68a6a-03e3-40ec-ad9e-5dcb9770aab9] Pending
helpers_test.go:352: "sp-pod" [0ef68a6a-03e3-40ec-ad9e-5dcb9770aab9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0ef68a6a-03e3-40ec-ad9e-5dcb9770aab9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004777951s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-850845 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.94s)
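
The sequence above is a persistence check: a file written through the claim by the first sp-pod must still be visible after that pod is deleted and a new one is bound to the same PVC. Below is a minimal sketch of that flow as standalone Go, assuming the functional-850845 context and the testdata/storage-provisioner manifests named in the log; it illustrates the check, not the test's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectl runs a kubectl subcommand against the assumed functional-850845 context.
func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-850845"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	steps := [][]string{
		{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m"},
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"}, // write through the claim
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"}, // new pod, same claim
		{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m"},
	}
	for _, s := range steps {
		if out, err := kubectl(s...); err != nil {
			panic(fmt.Sprintf("kubectl %v: %v\n%s", s, err, out))
		}
	}
	// The file written before the pod was deleted must still be on the volume.
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err != nil {
		panic(err)
	}
	if !strings.Contains(out, "foo") {
		panic("data did not survive pod recreation")
	}
	fmt.Println("volume contents after recreation:", strings.TrimSpace(out))
}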

                                                
                                    
TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh -n functional-850845 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 cp functional-850845:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3887317010/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh -n functional-850845 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh -n functional-850845 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.81s)

                                                
                                    
TestFunctional/parallel/MySQL (23.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-850845 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-t8676" [639dfa2e-a16b-40af-ad1c-e36ab96b6057] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-t8676" [639dfa2e-a16b-40af-ad1c-e36ab96b6057] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.00385274s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-850845 exec mysql-5bb876957f-t8676 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-850845 exec mysql-5bb876957f-t8676 -- mysql -ppassword -e "show databases;": exit status 1 (151.465879ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 08:37:45.558177  439524 retry.go:31] will retry after 1.407906799s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-850845 exec mysql-5bb876957f-t8676 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-850845 exec mysql-5bb876957f-t8676 -- mysql -ppassword -e "show databases;": exit status 1 (130.228138ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 08:37:47.097474  439524 retry.go:31] will retry after 1.128727072s: exit status 1
E1124 08:37:47.420727  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-850845 exec mysql-5bb876957f-t8676 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-850845 exec mysql-5bb876957f-t8676 -- mysql -ppassword -e "show databases;": exit status 1 (112.617999ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 08:37:48.339599  439524 retry.go:31] will retry after 2.755063104s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-850845 exec mysql-5bb876957f-t8676 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.99s)
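
The "will retry after ..." lines above come from the harness polling the freshly created MySQL pod until mysqld finishes initializing; the early ERROR 1045 and ERROR 2002 responses are expected while the server is still starting. Below is a minimal sketch of that retry-with-backoff loop, with the probe command taken from the log and the backoff parameters as illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Probe the pod the same way the test does; any non-zero exit counts as "not ready yet".
	probe := func() error {
		return exec.Command("kubectl", "--context", "functional-850845",
			"exec", "mysql-5bb876957f-t8676", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
	}

	deadline := time.Now().Add(2 * time.Minute) // assumed overall budget
	delay := time.Second
	for {
		err := probe()
		if err == nil {
			fmt.Println("mysql is answering queries")
			return
		}
		if time.Now().After(deadline) {
			panic(fmt.Sprintf("gave up waiting for mysql: %v", err))
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // exponential backoff, bounded only by the deadline in this sketch
	}
}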

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/439524/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "sudo cat /etc/test/nested/copy/439524/hosts"
E1124 08:37:26.939023  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
TestFunctional/parallel/CertSync (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/439524.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "sudo cat /etc/ssl/certs/439524.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/439524.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "sudo cat /usr/share/ca-certificates/439524.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4395242.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "sudo cat /etc/ssl/certs/4395242.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4395242.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "sudo cat /usr/share/ca-certificates/4395242.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.83s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-850845 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-850845 ssh "sudo systemctl is-active docker": exit status 1 (368.772242ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-850845 ssh "sudo systemctl is-active crio": exit status 1 (323.426561ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
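
For a containerd profile the other runtimes must stay disabled, which is why the expected result above is stdout "inactive" together with a non-zero exit (systemctl is-active exits 3 for an inactive unit, surfaced by the ssh wrapper as exit status 1). A minimal sketch of the same check, assuming the binary path and profile name shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		// Output() still returns the captured stdout when the command exits
		// non-zero, which is the expected case for a disabled unit.
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-850845",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		if state == "inactive" {
			fmt.Printf("%s: disabled, as expected for a containerd profile\n", unit)
		} else {
			fmt.Printf("%s: unexpected state %q\n", unit, state)
		}
	}
}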

                                                
                                    
TestFunctional/parallel/License (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.52s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-850845 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-850845 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-850845 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-850845 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 479480: os: process already finished
helpers_test.go:525: unable to kill pid 479163: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-850845 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-850845
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-850845
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-850845 image ls --format short --alsologtostderr:
I1124 08:37:53.640072  486607 out.go:360] Setting OutFile to fd 1 ...
I1124 08:37:53.640358  486607 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:53.640369  486607 out.go:374] Setting ErrFile to fd 2...
I1124 08:37:53.640372  486607 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:53.640644  486607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
I1124 08:37:53.641247  486607 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1124 08:37:53.641361  486607 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1124 08:37:53.641866  486607 cli_runner.go:164] Run: docker container inspect functional-850845 --format={{.State.Status}}
I1124 08:37:53.660352  486607 ssh_runner.go:195] Run: systemctl --version
I1124 08:37:53.660398  486607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850845
I1124 08:37:53.678494  486607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/functional-850845/id_rsa Username:docker}
I1124 08:37:53.780142  486607 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-850845 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ localhost/my-image                          │ functional-850845  │ sha256:bf438c │ 775kB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:8aa150 │ 26MB   │
│ docker.io/kicbase/echo-server               │ functional-850845  │ sha256:9056ab │ 2.37MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:9056ab │ 2.37MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/library/nginx                     │ latest             │ sha256:60adc2 │ 59.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:a5f569 │ 27.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:01e8ba │ 22.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:88320b │ 17.4MB │
│ docker.io/library/minikube-local-cache-test │ functional-850845  │ sha256:50d887 │ 991B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:d4918c │ 22.6MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-850845 image ls --format table --alsologtostderr:
I1124 08:37:58.354817  488166 out.go:360] Setting OutFile to fd 1 ...
I1124 08:37:58.355143  488166 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:58.355156  488166 out.go:374] Setting ErrFile to fd 2...
I1124 08:37:58.355164  488166 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:58.355635  488166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
I1124 08:37:58.356382  488166 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1124 08:37:58.356551  488166 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1124 08:37:58.357127  488166 cli_runner.go:164] Run: docker container inspect functional-850845 --format={{.State.Status}}
I1124 08:37:58.379118  488166 ssh_runner.go:195] Run: systemctl --version
I1124 08:37:58.379183  488166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850845
I1124 08:37:58.401088  488166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/functional-850845/id_rsa Username:docker}
I1124 08:37:58.508856  488166 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-850845 image ls --format json --alsologtostderr:
[{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-850845","docker.io/kicbase/echo-server:latest"],"size":"2372971"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0ee
d193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"27060130"},{"id":"sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"25963482"},{"id":"sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"17382272"},{"id":"sha256:50d8871f80b0198d84a0a0d99f07d43750bff3a9cec1e537b1400d4146182233","repoDigests":[],"r
epoTags":["docker.io/library/minikube-local-cache-test:functional-850845"],"size":"991"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"59772801"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:bf438cafec264a727f9c0efc5f88fb0432e7cb3616693b03c3edc5bcd232b524","repoDigests":[],"repoTags":["localhost/my-image:functional-850845"
],"size":"774888"},{"id":"sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"22818657"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22631814"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256
:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-850845 image ls --format json --alsologtostderr:
I1124 08:37:58.073635  488006 out.go:360] Setting OutFile to fd 1 ...
I1124 08:37:58.073943  488006 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:58.073957  488006 out.go:374] Setting ErrFile to fd 2...
I1124 08:37:58.073964  488006 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:58.074368  488006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
I1124 08:37:58.075080  488006 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1124 08:37:58.075190  488006 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1124 08:37:58.075843  488006 cli_runner.go:164] Run: docker container inspect functional-850845 --format={{.State.Status}}
I1124 08:37:58.096932  488006 ssh_runner.go:195] Run: systemctl --version
I1124 08:37:58.096986  488006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850845
I1124 08:37:58.119305  488006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/functional-850845/id_rsa Username:docker}
I1124 08:37:58.234979  488006 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-850845 image ls --format yaml --alsologtostderr:
- id: sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "17382272"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "22631814"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:50d8871f80b0198d84a0a0d99f07d43750bff3a9cec1e537b1400d4146182233
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-850845
size: "991"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "22818657"
- id: sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "25963482"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-850845
- docker.io/kicbase/echo-server:latest
size: "2372971"
- id: sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "59772801"
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "27060130"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-850845 image ls --format yaml --alsologtostderr:
I1124 08:37:53.884739  486718 out.go:360] Setting OutFile to fd 1 ...
I1124 08:37:53.884970  486718 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:53.884979  486718 out.go:374] Setting ErrFile to fd 2...
I1124 08:37:53.884983  486718 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:53.885174  486718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
I1124 08:37:53.885781  486718 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1124 08:37:53.885965  486718 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1124 08:37:53.886680  486718 cli_runner.go:164] Run: docker container inspect functional-850845 --format={{.State.Status}}
I1124 08:37:53.906082  486718 ssh_runner.go:195] Run: systemctl --version
I1124 08:37:53.906128  486718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850845
I1124 08:37:53.925128  486718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/functional-850845/id_rsa Username:docker}
I1124 08:37:54.028841  486718 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh pgrep buildkitd
I1124 08:37:54.122162  439524 detect.go:223] nested VM detected
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-850845 ssh pgrep buildkitd: exit status 1 (307.038359ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image build -t localhost/my-image:functional-850845 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-850845 image build -t localhost/my-image:functional-850845 testdata/build --alsologtostderr: (3.344015623s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-850845 image build -t localhost/my-image:functional-850845 testdata/build --alsologtostderr:
I1124 08:37:54.430670  486978 out.go:360] Setting OutFile to fd 1 ...
I1124 08:37:54.431421  486978 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:54.431431  486978 out.go:374] Setting ErrFile to fd 2...
I1124 08:37:54.431436  486978 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:37:54.431674  486978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
I1124 08:37:54.432222  486978 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1124 08:37:54.432926  486978 config.go:182] Loaded profile config "functional-850845": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1124 08:37:54.433380  486978 cli_runner.go:164] Run: docker container inspect functional-850845 --format={{.State.Status}}
I1124 08:37:54.453444  486978 ssh_runner.go:195] Run: systemctl --version
I1124 08:37:54.453519  486978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850845
I1124 08:37:54.473601  486978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/functional-850845/id_rsa Username:docker}
I1124 08:37:54.575707  486978 build_images.go:162] Building image from path: /tmp/build.4157778974.tar
I1124 08:37:54.575791  486978 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 08:37:54.584208  486978 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4157778974.tar
I1124 08:37:54.588010  486978 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4157778974.tar: stat -c "%s %y" /var/lib/minikube/build/build.4157778974.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4157778974.tar': No such file or directory
I1124 08:37:54.588036  486978 ssh_runner.go:362] scp /tmp/build.4157778974.tar --> /var/lib/minikube/build/build.4157778974.tar (3072 bytes)
I1124 08:37:54.606012  486978 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4157778974
I1124 08:37:54.613619  486978 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4157778974 -xf /var/lib/minikube/build/build.4157778974.tar
I1124 08:37:54.621427  486978 containerd.go:394] Building image: /var/lib/minikube/build/build.4157778974
I1124 08:37:54.621506  486978 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4157778974 --local dockerfile=/var/lib/minikube/build/build.4157778974 --output type=image,name=localhost/my-image:functional-850845
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.4s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:6f89cb74cfdd4ea0b8d9caa11ca688299d907aca616cd8ccb0a9b10f79b83e87 done
#8 exporting config sha256:bf438cafec264a727f9c0efc5f88fb0432e7cb3616693b03c3edc5bcd232b524 done
#8 naming to localhost/my-image:functional-850845 done
#8 DONE 0.1s
I1124 08:37:57.690028  486978 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4157778974 --local dockerfile=/var/lib/minikube/build/build.4157778974 --output type=image,name=localhost/my-image:functional-850845: (3.068486255s)
I1124 08:37:57.690129  486978 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4157778974
I1124 08:37:57.701081  486978 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4157778974.tar
I1124 08:37:57.710613  486978 build_images.go:218] Built localhost/my-image:functional-850845 from /tmp/build.4157778974.tar
I1124 08:37:57.710688  486978 build_images.go:134] succeeded building to: functional-850845
I1124 08:37:57.710698  486978 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.91286152s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-850845
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-850845 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-850845 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [53a3f7d0-e8b5-4804-9612-df713bf59085] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [53a3f7d0-e8b5-4804-9612-df713bf59085] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003609668s
I1124 08:37:38.031133  439524 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image load --daemon kicbase/echo-server:functional-850845 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image load --daemon kicbase/echo-server:functional-850845 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-850845
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image load --daemon kicbase/echo-server:functional-850845 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.97s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "335.586254ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "63.587039ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "365.824762ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "69.138798ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image save kicbase/echo-server:functional-850845 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image rm kicbase/echo-server:functional-850845 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-850845
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 image save --daemon kicbase/echo-server:functional-850845 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-850845
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-850845 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.76.26 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-850845 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-850845 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-850845 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-rwcbz" [4cc08fc4-5948-45ca-8a41-019d5a2b2d79] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-rwcbz" [4cc08fc4-5948-45ca-8a41-019d5a2b2d79] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.00350166s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.92s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-850845 /tmp/TestFunctionalparallelMountCmdany-port2387019361/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763973469033537963" to /tmp/TestFunctionalparallelMountCmdany-port2387019361/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763973469033537963" to /tmp/TestFunctionalparallelMountCmdany-port2387019361/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763973469033537963" to /tmp/TestFunctionalparallelMountCmdany-port2387019361/001/test-1763973469033537963
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-850845 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (293.689603ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:37:49.327625  439524 retry.go:31] will retry after 338.899668ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 08:37 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 08:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 08:37 test-1763973469033537963
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh cat /mount-9p/test-1763973469033537963
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-850845 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9a52cc61-89f5-431a-83f3-c0b899b6d615] Pending
helpers_test.go:352: "busybox-mount" [9a52cc61-89f5-431a-83f3-c0b899b6d615] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9a52cc61-89f5-431a-83f3-c0b899b6d615] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9a52cc61-89f5-431a-83f3-c0b899b6d615] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.002820195s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-850845 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-850845 /tmp/TestFunctionalparallelMountCmdany-port2387019361/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.74s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 service list -o json
functional_test.go:1504: Took "916.967701ms" to run "out/minikube-linux-amd64 -p functional-850845 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31537
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31537
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.58s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-850845 /tmp/TestFunctionalparallelMountCmdspecific-port960854229/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-850845 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (341.747958ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:37:57.114642  439524 retry.go:31] will retry after 615.775664ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-850845 /tmp/TestFunctionalparallelMountCmdspecific-port960854229/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-850845 ssh "sudo umount -f /mount-9p": exit status 1 (309.678836ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-850845 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-850845 /tmp/TestFunctionalparallelMountCmdspecific-port960854229/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-850845 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2360328072/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-850845 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2360328072/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-850845 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2360328072/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-850845 ssh "findmnt -T" /mount1: exit status 1 (363.221007ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:37:59.301662  439524 retry.go:31] will retry after 566.490626ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-850845 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-850845 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-850845 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2360328072/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-850845 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2360328072/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-850845 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2360328072/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
2025/11/24 08:38:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-850845
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-850845
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-850845
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21978-435860/.minikube/files/etc/test/nested/copy/439524/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (46.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-749436 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1124 08:38:28.383225  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-749436 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (46.906432587s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (46.91s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (7.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1124 08:38:57.731936  439524 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-749436 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-749436 --alsologtostderr -v=8: (7.229679245s)
functional_test.go:678: soft start took 7.230067013s for "functional-749436" cluster.
I1124 08:39:04.962105  439524 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (7.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-749436 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-749436 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach4016829326/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 cache add minikube-local-cache-test:functional-749436
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-749436 cache add minikube-local-cache-test:functional-749436: (1.789099532s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 cache delete minikube-local-cache-test:functional-749436
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-749436
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-749436 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.351896ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 kubectl -- --context functional-749436 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-749436 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (35.76s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-749436 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-749436 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.759206168s)
functional_test.go:776: restart took 35.759356086s for "functional-749436" cluster.
I1124 08:39:47.828051  439524 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (35.76s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-749436 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-749436 logs: (1.222843111s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3364984297/001/logs.txt
E1124 08:39:50.304776  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-749436 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3364984297/001/logs.txt: (1.228457815s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-749436 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-749436
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-749436: exit status 115 (350.491068ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31030 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-749436 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-749436 config get cpus: exit status 14 (107.861836ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-749436 config get cpus: exit status 14 (89.702943ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (14.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-749436 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-749436 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 506555: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (14.49s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-749436 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-749436 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (194.173714ms)

                                                
                                                
-- stdout --
	* [functional-749436] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:40:18.006394  505690 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:40:18.006520  505690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:40:18.006529  505690 out.go:374] Setting ErrFile to fd 2...
	I1124 08:40:18.006533  505690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:40:18.006745  505690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 08:40:18.007140  505690 out.go:368] Setting JSON to false
	I1124 08:40:18.008248  505690 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12154,"bootTime":1763961464,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:40:18.008306  505690 start.go:143] virtualization: kvm guest
	I1124 08:40:18.010148  505690 out.go:179] * [functional-749436] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 08:40:18.011386  505690 notify.go:221] Checking for updates...
	I1124 08:40:18.011403  505690 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:40:18.012679  505690 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:40:18.013914  505690 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 08:40:18.015040  505690 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 08:40:18.016069  505690 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:40:18.020670  505690 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:40:18.022729  505690 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 08:40:18.023573  505690 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:40:18.053528  505690 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:40:18.053711  505690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:40:18.121413  505690 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 08:40:18.109246653 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:40:18.121574  505690 docker.go:319] overlay module found
	I1124 08:40:18.124575  505690 out.go:179] * Using the docker driver based on existing profile
	I1124 08:40:18.125837  505690 start.go:309] selected driver: docker
	I1124 08:40:18.125863  505690 start.go:927] validating driver "docker" against &{Name:functional-749436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-749436 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:40:18.125982  505690 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:40:18.127828  505690 out.go:203] 
	W1124 08:40:18.128853  505690 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 08:40:18.129928  505690 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-749436 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-749436 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-749436 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (171.390507ms)

-- stdout --
	* [functional-749436] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1124 08:40:18.441961  506072 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:40:18.442100  506072 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:40:18.442110  506072 out.go:374] Setting ErrFile to fd 2...
	I1124 08:40:18.442117  506072 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:40:18.442438  506072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 08:40:18.442914  506072 out.go:368] Setting JSON to false
	I1124 08:40:18.443963  506072 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12154,"bootTime":1763961464,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 08:40:18.444031  506072 start.go:143] virtualization: kvm guest
	I1124 08:40:18.445693  506072 out.go:179] * [functional-749436] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1124 08:40:18.446817  506072 notify.go:221] Checking for updates...
	I1124 08:40:18.446834  506072 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 08:40:18.447961  506072 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 08:40:18.448931  506072 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 08:40:18.450036  506072 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 08:40:18.450968  506072 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 08:40:18.451972  506072 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 08:40:18.453330  506072 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 08:40:18.453925  506072 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 08:40:18.478133  506072 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 08:40:18.478326  506072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:40:18.535259  506072 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 08:40:18.52536918 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:40:18.535415  506072 docker.go:319] overlay module found
	I1124 08:40:18.536991  506072 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 08:40:18.537937  506072 start.go:309] selected driver: docker
	I1124 08:40:18.537950  506072 start.go:927] validating driver "docker" against &{Name:functional-749436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-749436 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 08:40:18.538039  506072 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 08:40:18.539544  506072 out.go:203] 
	W1124 08:40:18.540447  506072 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 08:40:18.541297  506072 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-749436 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-749436 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-9zxjm" [a17c759b-ee13-49e9-b13a-c34f73a38bdb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-9zxjm" [a17c759b-ee13-49e9-b13a-c34f73a38bdb] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003405526s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32573
functional_test.go:1680: http://192.168.49.2:32573: success! body:
Request served by hello-node-connect-9f67c86d4-9zxjm

HTTP/1.1 GET /

Host: 192.168.49.2:32573
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (32.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [22f7c63b-457f-4a16-873a-4979fa90ef61] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003564148s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-749436 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-749436 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-749436 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-749436 apply -f testdata/storage-provisioner/pod.yaml
I1124 08:40:09.089251  439524 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a3ab5a76-a4d7-4c7d-aa67-ceb4ba3c7ba1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [a3ab5a76-a4d7-4c7d-aa67-ceb4ba3c7ba1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004294896s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-749436 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-749436 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-749436 apply -f testdata/storage-provisioner/pod.yaml
I1124 08:40:22.183807  439524 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [955383b8-2cff-401a-8184-4853aeed1439] Pending
helpers_test.go:352: "sp-pod" [955383b8-2cff-401a-8184-4853aeed1439] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [955383b8-2cff-401a-8184-4853aeed1439] Running
2025/11/24 08:40:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004005412s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-749436 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (32.59s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh -n functional-749436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 cp functional-749436:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3244915605/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh -n functional-749436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh -n functional-749436 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.93s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (18.91s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-749436 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-cjzt8" [7139886f-e6b3-49fc-8a92-cbc1de1c681a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-cjzt8" [7139886f-e6b3-49fc-8a92-cbc1de1c681a] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 16.003363522s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-749436 exec mysql-844cf969f6-cjzt8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-749436 exec mysql-844cf969f6-cjzt8 -- mysql -ppassword -e "show databases;": exit status 1 (130.829266ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1124 08:40:11.388984  439524 retry.go:31] will retry after 1.398518438s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-749436 exec mysql-844cf969f6-cjzt8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-749436 exec mysql-844cf969f6-cjzt8 -- mysql -ppassword -e "show databases;": exit status 1 (144.336035ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1124 08:40:12.933075  439524 retry.go:31] will retry after 867.364357ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-749436 exec mysql-844cf969f6-cjzt8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (18.91s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/439524/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "sudo cat /etc/test/nested/copy/439524/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/439524.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "sudo cat /etc/ssl/certs/439524.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/439524.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "sudo cat /usr/share/ca-certificates/439524.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4395242.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "sudo cat /etc/ssl/certs/4395242.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4395242.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "sudo cat /usr/share/ca-certificates/4395242.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-749436 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-749436 ssh "sudo systemctl is-active docker": exit status 1 (326.628553ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-749436 ssh "sudo systemctl is-active crio": exit status 1 (329.217233ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.66s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-749436 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/etcd:3.5.24-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-749436
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-749436
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-749436 image ls --format short --alsologtostderr:
I1124 08:40:23.931508  507585 out.go:360] Setting OutFile to fd 1 ...
I1124 08:40:23.931841  507585 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:40:23.931855  507585 out.go:374] Setting ErrFile to fd 2...
I1124 08:40:23.931861  507585 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:40:23.932190  507585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
I1124 08:40:23.933032  507585 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1124 08:40:23.933191  507585 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1124 08:40:23.933764  507585 cli_runner.go:164] Run: docker container inspect functional-749436 --format={{.State.Status}}
I1124 08:40:23.954752  507585 ssh_runner.go:195] Run: systemctl --version
I1124 08:40:23.954809  507585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749436
I1124 08:40:23.976024  507585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/functional-749436/id_rsa Username:docker}
I1124 08:40:24.088887  507585 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-749436 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/my-image                          │ functional-749436  │ sha256:0a10d7 │ 775kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:45f3cc │ 23.1MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:8a4ded │ 25.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:7bb621 │ 17.2MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/library/minikube-local-cache-test │ functional-749436  │ sha256:50d887 │ 991B   │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ docker.io/library/nginx                     │ alpine             │ sha256:d4918c │ 22.6MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/library/nginx                     │ latest             │ sha256:60adc2 │ 59.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:aa5e3e │ 23.6MB │
│ registry.k8s.io/etcd                        │ 3.5.24-0           │ sha256:8cb12d │ 23.7MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 318kB  │
│ docker.io/kicbase/echo-server               │ functional-749436  │ sha256:9056ab │ 2.37MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:9056ab │ 2.37MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:aa9d02 │ 27.7MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-749436 image ls --format table --alsologtostderr:
I1124 08:40:28.710343  509592 out.go:360] Setting OutFile to fd 1 ...
I1124 08:40:28.710505  509592 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:40:28.710517  509592 out.go:374] Setting ErrFile to fd 2...
I1124 08:40:28.710524  509592 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:40:28.710803  509592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
I1124 08:40:28.711498  509592 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1124 08:40:28.711629  509592 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1124 08:40:28.712117  509592 cli_runner.go:164] Run: docker container inspect functional-749436 --format={{.State.Status}}
I1124 08:40:28.734878  509592 ssh_runner.go:195] Run: systemctl --version
I1124 08:40:28.734946  509592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749436
I1124 08:40:28.754980  509592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/functional-749436/id_rsa Username:docker}
I1124 08:40:28.864056  509592 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-749436 image ls --format json --alsologtostderr:
[{"id":"sha256:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"17226414"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"597
72801"},{"id":"sha256:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"23119069"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"317967"},{"id":"sha256:50d8871f80b0198d84a0a0d99f07d43750bff3a9cec1e537b1400d4146182233","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-749436"],"size":"991"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9057171"},{"id":"sha256:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"25785436"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac
38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-749436","docker.io/kicbase/echo-server:latest"],"size":"2372971"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.24-0"],"size":"23713864"},{"id":"sha256:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"27669846"},{"id"
:"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22631814"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:0a10d71e96487a5ea44bf3f5f938b1987fdf138e0ddd3440e68245e4cb37742e","repoDigests":[],"repoTags":["localhost/my-image:functional-749436"],"size":"774889"},{"id":"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"23550419"},{"id
":"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-749436 image ls --format json --alsologtostderr:
I1124 08:40:28.435568  509444 out.go:360] Setting OutFile to fd 1 ...
I1124 08:40:28.435675  509444 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:40:28.435685  509444 out.go:374] Setting ErrFile to fd 2...
I1124 08:40:28.435692  509444 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:40:28.436023  509444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
I1124 08:40:28.436732  509444 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1124 08:40:28.436853  509444 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1124 08:40:28.437391  509444 cli_runner.go:164] Run: docker container inspect functional-749436 --format={{.State.Status}}
I1124 08:40:28.459839  509444 ssh_runner.go:195] Run: systemctl --version
I1124 08:40:28.459912  509444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749436
I1124 08:40:28.483685  509444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/functional-749436/id_rsa Username:docker}
I1124 08:40:28.597183  509444 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-749436 image ls --format yaml --alsologtostderr:
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "22631814"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9057171"
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "317967"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "23119069"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-749436
- docker.io/kicbase/echo-server:latest
size: "2372971"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.24-0
size: "23713864"
- id: sha256:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "27669846"
- id: sha256:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "25785436"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:50d8871f80b0198d84a0a0d99f07d43750bff3a9cec1e537b1400d4146182233
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-749436
size: "991"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "59772801"
- id: sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "23550419"
- id: sha256:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "17226414"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-749436 image ls --format yaml --alsologtostderr:
I1124 08:40:24.205185  507690 out.go:360] Setting OutFile to fd 1 ...
I1124 08:40:24.205313  507690 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:40:24.205324  507690 out.go:374] Setting ErrFile to fd 2...
I1124 08:40:24.205332  507690 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:40:24.205638  507690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
I1124 08:40:24.206485  507690 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1124 08:40:24.206676  507690 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1124 08:40:24.207614  507690 cli_runner.go:164] Run: docker container inspect functional-749436 --format={{.State.Status}}
I1124 08:40:24.228265  507690 ssh_runner.go:195] Run: systemctl --version
I1124 08:40:24.228321  507690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749436
I1124 08:40:24.247309  507690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/functional-749436/id_rsa Username:docker}
I1124 08:40:24.350290  507690 ssh_runner.go:195] Run: sudo crictl images --output json
W1124 08:40:24.376997  507690 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 78b35a16-ccb4-4bce-bf72-13334e455909
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)
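
The YAML block above is the stdout of the `image ls --format yaml` invocation whose stderr trace is shown just before this PASS line. A minimal sketch of reproducing it by hand and narrowing it to one of the tags loaded earlier in this group; the grep filter is illustrative and not part of the test:

    out/minikube-linux-amd64 -p functional-749436 image ls --format yaml --alsologtostderr
    # Illustrative filter: pick out the echo-server entries from the YAML listing.
    out/minikube-linux-amd64 -p functional-749436 image ls --format yaml | grep 'kicbase/echo-server'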

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-749436 ssh pgrep buildkitd: exit status 1 (298.597232ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image build -t localhost/my-image:functional-749436 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-749436 image build -t localhost/my-image:functional-749436 testdata/build --alsologtostderr: (3.409382727s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-749436 image build -t localhost/my-image:functional-749436 testdata/build --alsologtostderr:
I1124 08:40:24.743210  508044 out.go:360] Setting OutFile to fd 1 ...
I1124 08:40:24.743336  508044 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:40:24.743347  508044 out.go:374] Setting ErrFile to fd 2...
I1124 08:40:24.743354  508044 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 08:40:24.743590  508044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
I1124 08:40:24.744144  508044 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1124 08:40:24.744834  508044 config.go:182] Loaded profile config "functional-749436": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1124 08:40:24.745275  508044 cli_runner.go:164] Run: docker container inspect functional-749436 --format={{.State.Status}}
I1124 08:40:24.768178  508044 ssh_runner.go:195] Run: systemctl --version
I1124 08:40:24.768241  508044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749436
I1124 08:40:24.793633  508044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/functional-749436/id_rsa Username:docker}
I1124 08:40:24.910546  508044 build_images.go:162] Building image from path: /tmp/build.1344582653.tar
I1124 08:40:24.910628  508044 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 08:40:24.919160  508044 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1344582653.tar
I1124 08:40:24.923924  508044 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1344582653.tar: stat -c "%s %y" /var/lib/minikube/build/build.1344582653.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1344582653.tar': No such file or directory
I1124 08:40:24.923967  508044 ssh_runner.go:362] scp /tmp/build.1344582653.tar --> /var/lib/minikube/build/build.1344582653.tar (3072 bytes)
I1124 08:40:24.942915  508044 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1344582653
I1124 08:40:24.951043  508044 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1344582653 -xf /var/lib/minikube/build/build.1344582653.tar
I1124 08:40:24.959139  508044 containerd.go:394] Building image: /var/lib/minikube/build/build.1344582653
I1124 08:40:24.959215  508044 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1344582653 --local dockerfile=/var/lib/minikube/build/build.1344582653 --output type=image,name=localhost/my-image:functional-749436
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:86f00697ee19638f8cbcd8c31897c8b08f2065e16115359e1246bf2fb9aa403d done
#8 exporting config sha256:0a10d71e96487a5ea44bf3f5f938b1987fdf138e0ddd3440e68245e4cb37742e done
#8 naming to localhost/my-image:functional-749436 done
#8 DONE 0.1s
I1124 08:40:28.060176  508044 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1344582653 --local dockerfile=/var/lib/minikube/build/build.1344582653 --output type=image,name=localhost/my-image:functional-749436: (3.100924455s)
I1124 08:40:28.060256  508044 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1344582653
I1124 08:40:28.070680  508044 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1344582653.tar
I1124 08:40:28.079899  508044 build_images.go:218] Built localhost/my-image:functional-749436 from /tmp/build.1344582653.tar
I1124 08:40:28.079938  508044 build_images.go:134] succeeded building to: functional-749436
I1124 08:40:28.079944  508044 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.98s)
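
The buildctl trace above (#1 through #8) implies the Dockerfile shipped in testdata/build: a FROM on gcr.io/k8s-minikube/busybox:latest, a RUN true, and an ADD of content.txt. A minimal sketch that rebuilds an equivalent context by hand is shown below; the Dockerfile body, the placeholder content.txt payload, the /tmp/repro-build path, and the :repro tag are reconstructed or invented for illustration, not taken from the repository:

    # Recreate an equivalent build context (contents inferred from build steps #1-#7 above).
    mkdir -p /tmp/repro-build && cd /tmp/repro-build
    echo placeholder > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    # Same entry point the test exercises: minikube tars the directory, copies it into the node,
    # and runs buildctl there (see the ssh_runner lines above).
    out/minikube-linux-amd64 -p functional-749436 image build -t localhost/my-image:repro /tmp/repro-build --alsologtostderr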

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-749436
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.93s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (16.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-749436 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-749436 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-5ccpj" [fc336598-bdf9-446e-9213-c6edc8755071] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-5ccpj" [fc336598-bdf9-446e-9213-c6edc8755071] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.003588455s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (16.16s)
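
The three steps above (create deployment, expose as NodePort, wait for the pod) are the whole DeployApp flow; the ServiceCmd/HTTPS and ServiceCmd/URL subtests further down resolve the resulting endpoint. A hand-run sketch of the same flow; the trailing curl is added for illustration, and the 31982 port is the endpoint reported in this run, which will vary:

    kubectl --context functional-749436 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-749436 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-749436 get pods -l app=hello-node            # wait until Running
    out/minikube-linux-amd64 -p functional-749436 service hello-node --url    # prints the NodePort URL
    curl http://192.168.49.2:31982/                                           # endpoint from this run; illustrative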

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image load --daemon kicbase/echo-server:functional-749436 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-749436 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-749436 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-749436 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-749436 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 502125: os: process already finished
helpers_test.go:525: unable to kill pid 501931: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-749436 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (19.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-749436 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [5175fb3f-6c70-4eed-8b14-5d1ae39bb8c9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [5175fb3f-6c70-4eed-8b14-5d1ae39bb8c9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 19.004846596s
I1124 08:40:17.037146  439524 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (19.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image load --daemon kicbase/echo-server:functional-749436 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (2.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-749436
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image load --daemon kicbase/echo-server:functional-749436 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (2.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image save kicbase/echo-server:functional-749436 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image rm kicbase/echo-server:functional-749436 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.69s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-749436
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 image save --daemon kicbase/echo-server:functional-749436 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-749436
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.42s)
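
Taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon subtests above round-trip the echo-server image through a tarball and through the host Docker daemon. A condensed sketch of the tarball round trip; the /tmp path is illustrative, whereas the test uses the Jenkins workspace path shown above:

    out/minikube-linux-amd64 -p functional-749436 image save kicbase/echo-server:functional-749436 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-749436 image rm kicbase/echo-server:functional-749436
    out/minikube-linux-amd64 -p functional-749436 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-749436 image ls | grep echo-server   # confirm the tag is back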

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.56s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 service list -o json
functional_test.go:1504: Took "555.636932ms" to run "out/minikube-linux-amd64 -p functional-749436 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31982
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.59s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.60s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31982
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.60s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-749436 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "378.863596ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.471989ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.185.68 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)
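
The tunnel serial group boils down to: keep `minikube tunnel` running, deploy a LoadBalancer service, and read back the ingress IP it is assigned. A minimal manual sketch of that flow; the 10.97.185.68 address is the one observed in this run and will differ between runs:

    # Terminal 1: keep the tunnel in the foreground so LoadBalancer services get a reachable ingress IP.
    out/minikube-linux-amd64 -p functional-749436 tunnel --alsologtostderr
    # Terminal 2: deploy the test service and read back its ingress IP, as WaitService/IngressIP does.
    kubectl --context functional-749436 apply -f testdata/testsvc.yaml
    kubectl --context functional-749436 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.97.185.68/    # address from this run; illustrative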

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-749436 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-749436 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1226107714/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763973617207718817" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1226107714/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763973617207718817" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1226107714/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763973617207718817" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1226107714/001/test-1763973617207718817
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.283467ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:40:17.516306  439524 retry.go:31] will retry after 424.281811ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 08:40 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 08:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 08:40 test-1763973617207718817
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh cat /mount-9p/test-1763973617207718817
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-749436 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [10cfc1fb-7eeb-4087-a091-2147157a7b27] Pending
helpers_test.go:352: "busybox-mount" [10cfc1fb-7eeb-4087-a091-2147157a7b27] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [10cfc1fb-7eeb-4087-a091-2147157a7b27] Running
helpers_test.go:352: "busybox-mount" [10cfc1fb-7eeb-4087-a091-2147157a7b27] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [10cfc1fb-7eeb-4087-a091-2147157a7b27] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00368093s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-749436 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-749436 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1226107714/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.91s)
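
The any-port mount test above is, in essence: run `minikube mount` as a long-lived process, then verify the 9p mount from inside the node. A hand-run sketch using an illustrative /tmp/mount-demo host directory (the test uses a per-test temp dir):

    # Host side: expose a directory into the node over 9p; keep this process running.
    out/minikube-linux-amd64 mount -p functional-749436 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
    # Guest side: confirm the mount and inspect it, mirroring the findmnt/ls steps above.
    out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-749436 ssh -- ls -la /mount-9p
    # Tear down if the background process is left behind.
    out/minikube-linux-amd64 -p functional-749436 ssh "sudo umount -f /mount-9p"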

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "348.173575ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "66.768224ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-749436 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3225629068/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.673211ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:40:25.413710  439524 retry.go:31] will retry after 400.352449ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-749436 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3225629068/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-749436 ssh "sudo umount -f /mount-9p": exit status 1 (334.861593ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-749436 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-749436 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3225629068/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.93s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-749436 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo352786870/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-749436 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo352786870/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-749436 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo352786870/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T" /mount1: exit status 1 (417.329993ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 08:40:27.463356  439524 retry.go:31] will retry after 633.701446ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-749436 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-749436 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo352786870/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-749436 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo352786870/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-749436 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo352786870/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.08s)
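
VerifyCleanup relies on `mount --kill=true` to reap all mount helper processes for the profile in one shot. A short sketch of that cleanup path; the follow-up findmnt is expected to exit non-zero once nothing is mounted:

    out/minikube-linux-amd64 mount -p functional-749436 --kill=true
    out/minikube-linux-amd64 -p functional-749436 ssh "findmnt -T" /mount1   # non-zero exit expected after cleanup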

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-749436
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-749436
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-749436
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (122.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1124 08:42:06.443510  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:26.781063  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:26.787542  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:26.798953  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:26.820417  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:26.861883  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:26.943371  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:27.104935  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:27.426671  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:28.068224  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:29.349659  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:31.911637  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:34.146904  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:42:37.033620  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-733307 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m1.324647716s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (122.06s)
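
The cert_rotation errors interleaved above reference client certs of other profiles (addons-598179, functional-850845) that no longer exist on disk; they come from the client cert loader and do not fail this test, which passes. For reference, the HA start flow the test exercises, with an illustrative kubectl check appended:

    out/minikube-linux-amd64 -p ha-733307 start --ha --memory 3072 --wait true \
      --alsologtostderr -v 5 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5
    kubectl --context ha-733307 get nodes    # illustrative: expect multiple control-plane nodes with --ha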

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.90s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-733307 kubectl -- rollout status deployment/busybox: (3.648918813s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-54pth -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-5w6st -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-pvng6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-54pth -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-5w6st -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-pvng6 -- nslookup kubernetes.default
E1124 08:42:47.275220  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-54pth -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-5w6st -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-pvng6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.90s)
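
DeployApp rolls out the busybox test deployment and then runs the same nslookup checks against every replica. A compact sketch of that loop; the for-loop form is illustrative, as the test issues the exec calls one by one as shown above:

    out/minikube-linux-amd64 -p ha-733307 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 -p ha-733307 kubectl -- rollout status deployment/busybox
    for pod in $(out/minikube-linux-amd64 -p ha-733307 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      out/minikube-linux-amd64 -p ha-733307 kubectl -- exec "$pod" -- nslookup kubernetes.io
      out/minikube-linux-amd64 -p ha-733307 kubectl -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done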

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.20s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-54pth -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-54pth -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-5w6st -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-5w6st -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-pvng6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-pvng6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
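
PingHostFromPods resolves host.minikube.internal from inside each pod and pings the resulting gateway address (192.168.49.1 on the docker driver network in this run). The extraction pipeline used by the test, runnable against any one of the replicas:

    out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-54pth -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 -p ha-733307 kubectl -- exec busybox-7b57f96db7-54pth -- sh -c "ping -c 1 192.168.49.1"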

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (24.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 node add --alsologtostderr -v 5
E1124 08:43:07.757071  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-733307 node add --alsologtostderr -v 5: (23.533851488s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-733307 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (17.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp testdata/cp-test.txt ha-733307:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1686855896/001/cp-test_ha-733307.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307:/home/docker/cp-test.txt ha-733307-m02:/home/docker/cp-test_ha-733307_ha-733307-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m02 "sudo cat /home/docker/cp-test_ha-733307_ha-733307-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307:/home/docker/cp-test.txt ha-733307-m03:/home/docker/cp-test_ha-733307_ha-733307-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m03 "sudo cat /home/docker/cp-test_ha-733307_ha-733307-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307:/home/docker/cp-test.txt ha-733307-m04:/home/docker/cp-test_ha-733307_ha-733307-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m04 "sudo cat /home/docker/cp-test_ha-733307_ha-733307-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp testdata/cp-test.txt ha-733307-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1686855896/001/cp-test_ha-733307-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307-m02:/home/docker/cp-test.txt ha-733307:/home/docker/cp-test_ha-733307-m02_ha-733307.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307 "sudo cat /home/docker/cp-test_ha-733307-m02_ha-733307.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307-m02:/home/docker/cp-test.txt ha-733307-m03:/home/docker/cp-test_ha-733307-m02_ha-733307-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m03 "sudo cat /home/docker/cp-test_ha-733307-m02_ha-733307-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307-m02:/home/docker/cp-test.txt ha-733307-m04:/home/docker/cp-test_ha-733307-m02_ha-733307-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m04 "sudo cat /home/docker/cp-test_ha-733307-m02_ha-733307-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp testdata/cp-test.txt ha-733307-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1686855896/001/cp-test_ha-733307-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307-m03:/home/docker/cp-test.txt ha-733307:/home/docker/cp-test_ha-733307-m03_ha-733307.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307 "sudo cat /home/docker/cp-test_ha-733307-m03_ha-733307.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307-m03:/home/docker/cp-test.txt ha-733307-m02:/home/docker/cp-test_ha-733307-m03_ha-733307-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m02 "sudo cat /home/docker/cp-test_ha-733307-m03_ha-733307-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307-m03:/home/docker/cp-test.txt ha-733307-m04:/home/docker/cp-test_ha-733307-m03_ha-733307-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m04 "sudo cat /home/docker/cp-test_ha-733307-m03_ha-733307-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp testdata/cp-test.txt ha-733307-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1686855896/001/cp-test_ha-733307-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307-m04:/home/docker/cp-test.txt ha-733307:/home/docker/cp-test_ha-733307-m04_ha-733307.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307 "sudo cat /home/docker/cp-test_ha-733307-m04_ha-733307.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307-m04:/home/docker/cp-test.txt ha-733307-m02:/home/docker/cp-test_ha-733307-m04_ha-733307-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m02 "sudo cat /home/docker/cp-test_ha-733307-m04_ha-733307-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 cp ha-733307-m04:/home/docker/cp-test.txt ha-733307-m03:/home/docker/cp-test_ha-733307-m04_ha-733307-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m03 "sudo cat /home/docker/cp-test_ha-733307-m04_ha-733307-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.85s)
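
For reference, the copy round-trip this test exercises can be repeated by hand; a minimal shell sketch using the profile and paths from the run above (host-to-node, node-to-host, and node-to-node all use the same <node>:<path> syntax):

    # host -> node, then read the file back over ssh to verify the contents
    out/minikube-linux-amd64 -p ha-733307 cp testdata/cp-test.txt ha-733307:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307 "sudo cat /home/docker/cp-test.txt"
    # node -> node: source and destination can both name a node in the profile
    out/minikube-linux-amd64 -p ha-733307 cp ha-733307:/home/docker/cp-test.txt ha-733307-m02:/home/docker/cp-test_ha-733307_ha-733307-m02.txt
    out/minikube-linux-amd64 -p ha-733307 ssh -n ha-733307-m02 "sudo cat /home/docker/cp-test_ha-733307_ha-733307-m02.txt"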

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-733307 node stop m02 --alsologtostderr -v 5: (12.005510358s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5: exit status 7 (702.474862ms)

                                                
                                                
-- stdout --
	ha-733307
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-733307-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-733307-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-733307-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:43:44.402768  530821 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:43:44.402978  530821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:43:44.402985  530821 out.go:374] Setting ErrFile to fd 2...
	I1124 08:43:44.402989  530821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:43:44.403208  530821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 08:43:44.403382  530821 out.go:368] Setting JSON to false
	I1124 08:43:44.403407  530821 mustload.go:66] Loading cluster: ha-733307
	I1124 08:43:44.403556  530821 notify.go:221] Checking for updates...
	I1124 08:43:44.403783  530821 config.go:182] Loaded profile config "ha-733307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 08:43:44.403804  530821 status.go:174] checking status of ha-733307 ...
	I1124 08:43:44.404274  530821 cli_runner.go:164] Run: docker container inspect ha-733307 --format={{.State.Status}}
	I1124 08:43:44.423339  530821 status.go:371] ha-733307 host status = "Running" (err=<nil>)
	I1124 08:43:44.423367  530821 host.go:66] Checking if "ha-733307" exists ...
	I1124 08:43:44.423690  530821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-733307
	I1124 08:43:44.440754  530821 host.go:66] Checking if "ha-733307" exists ...
	I1124 08:43:44.440999  530821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 08:43:44.441046  530821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-733307
	I1124 08:43:44.458691  530821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/ha-733307/id_rsa Username:docker}
	I1124 08:43:44.557929  530821 ssh_runner.go:195] Run: systemctl --version
	I1124 08:43:44.564556  530821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 08:43:44.576720  530821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:43:44.638912  530821 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 08:43:44.628702181 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:43:44.639617  530821 kubeconfig.go:125] found "ha-733307" server: "https://192.168.49.254:8443"
	I1124 08:43:44.639653  530821 api_server.go:166] Checking apiserver status ...
	I1124 08:43:44.639694  530821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 08:43:44.652107  530821 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W1124 08:43:44.660532  530821 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 08:43:44.660581  530821 ssh_runner.go:195] Run: ls
	I1124 08:43:44.664378  530821 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 08:43:44.670134  530821 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 08:43:44.670160  530821 status.go:463] ha-733307 apiserver status = Running (err=<nil>)
	I1124 08:43:44.670171  530821 status.go:176] ha-733307 status: &{Name:ha-733307 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 08:43:44.670186  530821 status.go:174] checking status of ha-733307-m02 ...
	I1124 08:43:44.670420  530821 cli_runner.go:164] Run: docker container inspect ha-733307-m02 --format={{.State.Status}}
	I1124 08:43:44.688197  530821 status.go:371] ha-733307-m02 host status = "Stopped" (err=<nil>)
	I1124 08:43:44.688217  530821 status.go:384] host is not running, skipping remaining checks
	I1124 08:43:44.688225  530821 status.go:176] ha-733307-m02 status: &{Name:ha-733307-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 08:43:44.688249  530821 status.go:174] checking status of ha-733307-m03 ...
	I1124 08:43:44.688593  530821 cli_runner.go:164] Run: docker container inspect ha-733307-m03 --format={{.State.Status}}
	I1124 08:43:44.705798  530821 status.go:371] ha-733307-m03 host status = "Running" (err=<nil>)
	I1124 08:43:44.705818  530821 host.go:66] Checking if "ha-733307-m03" exists ...
	I1124 08:43:44.706044  530821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-733307-m03
	I1124 08:43:44.723109  530821 host.go:66] Checking if "ha-733307-m03" exists ...
	I1124 08:43:44.723437  530821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 08:43:44.723512  530821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-733307-m03
	I1124 08:43:44.740867  530821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/ha-733307-m03/id_rsa Username:docker}
	I1124 08:43:44.840037  530821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 08:43:44.852781  530821 kubeconfig.go:125] found "ha-733307" server: "https://192.168.49.254:8443"
	I1124 08:43:44.852816  530821 api_server.go:166] Checking apiserver status ...
	I1124 08:43:44.852848  530821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 08:43:44.864115  530821 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1290/cgroup
	W1124 08:43:44.872430  530821 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1290/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 08:43:44.872506  530821 ssh_runner.go:195] Run: ls
	I1124 08:43:44.876179  530821 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 08:43:44.881147  530821 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 08:43:44.881169  530821 status.go:463] ha-733307-m03 apiserver status = Running (err=<nil>)
	I1124 08:43:44.881176  530821 status.go:176] ha-733307-m03 status: &{Name:ha-733307-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 08:43:44.881195  530821 status.go:174] checking status of ha-733307-m04 ...
	I1124 08:43:44.881425  530821 cli_runner.go:164] Run: docker container inspect ha-733307-m04 --format={{.State.Status}}
	I1124 08:43:44.898630  530821 status.go:371] ha-733307-m04 host status = "Running" (err=<nil>)
	I1124 08:43:44.898653  530821 host.go:66] Checking if "ha-733307-m04" exists ...
	I1124 08:43:44.898909  530821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-733307-m04
	I1124 08:43:44.915862  530821 host.go:66] Checking if "ha-733307-m04" exists ...
	I1124 08:43:44.916085  530821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 08:43:44.916134  530821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-733307-m04
	I1124 08:43:44.932495  530821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/ha-733307-m04/id_rsa Username:docker}
	I1124 08:43:45.030547  530821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 08:43:45.042965  530821 status.go:176] ha-733307-m04 status: &{Name:ha-733307-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.71s)
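
The stop-and-verify sequence above can be reproduced directly; a minimal sketch with the same profile (in this run, status exits with code 7 once m02 is stopped, which is the non-zero exit the test expects):

    out/minikube-linux-amd64 -p ha-733307 node stop m02 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5
    echo $?   # 7 in the run above, because one node reports Stopped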

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (9.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 node start m02 --alsologtostderr -v 5
E1124 08:43:48.718885  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-733307 node start m02 --alsologtostderr -v 5: (8.0353983s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.02s)
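
Restarting the stopped control-plane node is the inverse of the previous step; a minimal sketch of what the test drives, using the logged commands:

    out/minikube-linux-amd64 -p ha-733307 node start m02 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5
    kubectl get nodes   # sanity check against the API server once m02 is back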

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-733307 stop --alsologtostderr -v 5: (37.196062054s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 start --wait true --alsologtostderr -v 5
E1124 08:44:55.255017  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:44:55.261487  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:44:55.272924  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:44:55.294430  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:44:55.336625  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:44:55.418155  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:44:55.580416  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:44:55.902361  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:44:56.544680  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:44:57.826619  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:45:00.388613  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:45:05.510145  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:45:10.641707  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:45:15.751786  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-733307 start --wait true --alsologtostderr -v 5: (59.953594776s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.29s)
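
The check here is that a full stop/start cycle preserves the node set; a minimal sketch of the sequence taken from the logged commands (the cert_rotation lines during start reference client certs from other test profiles, functional-*, not this cluster):

    out/minikube-linux-amd64 -p ha-733307 node list --alsologtostderr -v 5    # record the node list
    out/minikube-linux-amd64 -p ha-733307 stop --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-733307 start --wait true --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-733307 node list --alsologtostderr -v 5    # should match the list recorded before the stop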

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 node delete m03 --alsologtostderr -v 5
E1124 08:45:36.234041  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-733307 node delete m03 --alsologtostderr -v 5: (8.630313469s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.44s)
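
Removing a control-plane node and confirming the remaining members stay healthy follows the same command-then-verify pattern; a minimal sketch (the go-template shown in the log just extracts each node's Ready condition):

    out/minikube-linux-amd64 -p ha-733307 node delete m03 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5
    kubectl get nodes   # m03 should be gone; the remaining nodes stay Ready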

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 stop --alsologtostderr -v 5
E1124 08:46:17.195940  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-733307 stop --alsologtostderr -v 5: (35.990099793s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5: exit status 7 (118.317137ms)

                                                
                                                
-- stdout --
	ha-733307
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-733307-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-733307-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:46:19.256504  547389 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:46:19.256778  547389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:46:19.256789  547389 out.go:374] Setting ErrFile to fd 2...
	I1124 08:46:19.256793  547389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:46:19.257011  547389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 08:46:19.257177  547389 out.go:368] Setting JSON to false
	I1124 08:46:19.257203  547389 mustload.go:66] Loading cluster: ha-733307
	I1124 08:46:19.257280  547389 notify.go:221] Checking for updates...
	I1124 08:46:19.257612  547389 config.go:182] Loaded profile config "ha-733307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 08:46:19.257638  547389 status.go:174] checking status of ha-733307 ...
	I1124 08:46:19.258104  547389 cli_runner.go:164] Run: docker container inspect ha-733307 --format={{.State.Status}}
	I1124 08:46:19.278697  547389 status.go:371] ha-733307 host status = "Stopped" (err=<nil>)
	I1124 08:46:19.278719  547389 status.go:384] host is not running, skipping remaining checks
	I1124 08:46:19.278725  547389 status.go:176] ha-733307 status: &{Name:ha-733307 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 08:46:19.278745  547389 status.go:174] checking status of ha-733307-m02 ...
	I1124 08:46:19.278968  547389 cli_runner.go:164] Run: docker container inspect ha-733307-m02 --format={{.State.Status}}
	I1124 08:46:19.296359  547389 status.go:371] ha-733307-m02 host status = "Stopped" (err=<nil>)
	I1124 08:46:19.296438  547389 status.go:384] host is not running, skipping remaining checks
	I1124 08:46:19.296449  547389 status.go:176] ha-733307-m02 status: &{Name:ha-733307-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 08:46:19.296491  547389 status.go:174] checking status of ha-733307-m04 ...
	I1124 08:46:19.296762  547389 cli_runner.go:164] Run: docker container inspect ha-733307-m04 --format={{.State.Status}}
	I1124 08:46:19.313782  547389 status.go:371] ha-733307-m04 host status = "Stopped" (err=<nil>)
	I1124 08:46:19.313802  547389 status.go:384] host is not running, skipping remaining checks
	I1124 08:46:19.313808  547389 status.go:176] ha-733307-m04 status: &{Name:ha-733307-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.11s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (57.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1124 08:47:06.443713  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-733307 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (57.024852696s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.90s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (41.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 node add --control-plane --alsologtostderr -v 5
E1124 08:47:26.781747  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:47:39.118655  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 08:47:54.483771  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-733307 node add --control-plane --alsologtostderr -v 5: (40.937407644s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.84s)
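
Adding a control-plane member back is a single node add; a minimal sketch from the logged commands:

    out/minikube-linux-amd64 -p ha-733307 node add --control-plane --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-733307 status --alsologtostderr -v 5   # the new member appears as another "type: Control Plane" entry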

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                    
TestJSONOutput/start/Command (39.42s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-229791 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-229791 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (39.414918587s)
--- PASS: TestJSONOutput/start/Command (39.42s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-229791 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-229791 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.85s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-229791 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-229791 --output=json --user=testUser: (5.851726535s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-474663 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-474663 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.654922ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"932e4c15-341b-454f-b1b4-6e47569ffddd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-474663] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"82fdde1e-a689-4c6a-adbc-3fac3f5afaee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21978"}}
	{"specversion":"1.0","id":"6b66caf8-e120-448f-b6e6-a39565b262bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4fbbca92-7d9b-40a4-9442-e1c4911ef30a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig"}}
	{"specversion":"1.0","id":"515f31ae-a2ff-46d6-a3a9-aad2e9925df1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube"}}
	{"specversion":"1.0","id":"bb18508b-06be-47cf-985a-fb2d5d3e4125","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e52b44d4-debd-415e-9c93-a99803c9e03f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"418f73aa-8c10-4142-a19a-4ae70c9cef8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-474663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-474663
--- PASS: TestErrorJSONOutput (0.23s)
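
Every line emitted under --output=json is a CloudEvents-style object like the ones above, so the error can be pulled out of the stream mechanically. A small sketch, assuming jq is available on the host (the jq step is illustrative only; the test itself parses the events in Go):

    out/minikube-linux-amd64 start -p json-output-error-474663 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # prints: The driver 'fail' is not supported on linux/amd64   (minikube itself exits 56)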

                                                
                                    
TestKicCustomNetwork/create_custom_network (33.68s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-116634 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-116634 --network=: (31.543381722s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-116634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-116634
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-116634: (2.115300179s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.68s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.52s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-339305 --network=bridge
E1124 08:49:55.262644  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-339305 --network=bridge: (22.491027615s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-339305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-339305
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-339305: (2.011602455s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.52s)

                                                
                                    
TestKicExistingNetwork (26.48s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1124 08:49:58.851004  439524 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1124 08:49:58.867486  439524 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1124 08:49:58.867557  439524 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1124 08:49:58.867580  439524 cli_runner.go:164] Run: docker network inspect existing-network
W1124 08:49:58.883226  439524 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1124 08:49:58.883258  439524 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1124 08:49:58.883280  439524 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1124 08:49:58.883443  439524 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1124 08:49:58.900618  439524 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c654f70fdf0e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:f7:ca:91:9d:ad} reservation:<nil>}
I1124 08:49:58.901049  439524 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee7410}
I1124 08:49:58.901084  439524 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1124 08:49:58.901140  439524 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1124 08:49:58.947278  439524 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-306074 --network=existing-network
E1124 08:50:22.960367  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-306074 --network=existing-network: (24.361630324s)
helpers_test.go:175: Cleaning up "existing-network-306074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-306074
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-306074: (1.991760134s)
I1124 08:50:25.317890  439524 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.48s)
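
The setup here happens outside minikube: the test pre-creates a docker network and then points --network at it. A minimal sketch using the exact create command from the log:

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
    out/minikube-linux-amd64 start -p existing-network-306074 --network=existing-network
    docker network ls --format "{{.Name}}"   # existing-network is reused rather than recreated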

                                                
                                    
TestKicCustomSubnet (26.54s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-344333 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-344333 --subnet=192.168.60.0/24: (24.409781132s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-344333 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-344333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-344333
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-344333: (2.110192486s)
--- PASS: TestKicCustomSubnet (26.54s)

                                                
                                    
TestKicStaticIP (27.23s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-671830 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-671830 --static-ip=192.168.200.200: (24.962585814s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-671830 ip
helpers_test.go:175: Cleaning up "static-ip-671830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-671830
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-671830: (2.115308004s)
--- PASS: TestKicStaticIP (27.23s)
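
A static IP for the KIC container is requested at start time; a minimal sketch of the start/verify pair from the log:

    out/minikube-linux-amd64 start -p static-ip-671830 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-671830 ip   # should print 192.168.200.200
    out/minikube-linux-amd64 delete -p static-ip-671830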

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (53.43s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-802471 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-802471 --driver=docker  --container-runtime=containerd: (23.828786481s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-805712 --driver=docker  --container-runtime=containerd
E1124 08:52:06.443139  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-805712 --driver=docker  --container-runtime=containerd: (23.990298356s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-802471
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-805712
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-805712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-805712
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-805712: (1.986469505s)
helpers_test.go:175: Cleaning up "first-802471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-802471
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-802471: (2.347502584s)
--- PASS: TestMinikubeProfile (53.43s)
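
Switching between two clusters is done with the profile subcommand; a minimal sketch of the commands the test issues (it then inspects the JSON profile listing after each switch):

    out/minikube-linux-amd64 start -p first-802471 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 start -p second-805712 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 profile first-802471    # select first-802471 as the active profile
    out/minikube-linux-amd64 profile list -ojson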

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.36s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-982454 --memory=3072 --mount-string /tmp/TestMountStartserial2085336978/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-982454 --memory=3072 --mount-string /tmp/TestMountStartserial2085336978/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.358257252s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.36s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-982454 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
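
Editor's note: the two steps above pair a host-directory mount configured at start-up with an `ssh -- ls` check. A hedged sketch mirroring the logged flags; the host path, port, and profile name are illustrative.

    # hedged sketch: start a no-Kubernetes profile with a host directory mounted at /minikube-host
    minikube start -p mount-demo --memory=3072 --no-kubernetes \
      --mount-string "$HOME/mount-src:/minikube-host" \
      --mount-gid 0 --mount-uid 0 --mount-port 46464 \
      --driver=docker --container-runtime=containerd
    # verify the mount the same way the test does
    minikube -p mount-demo ssh -- ls /minikube-host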

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (4.72s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-011594 --memory=3072 --mount-string /tmp/TestMountStartserial2085336978/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-011594 --memory=3072 --mount-string /tmp/TestMountStartserial2085336978/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.722338488s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-011594 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-982454 --alsologtostderr -v=5
E1124 08:52:26.781778  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-982454 --alsologtostderr -v=5: (1.722435223s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-011594 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-011594
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-011594: (1.272615374s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.97s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-011594
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-011594: (6.967586285s)
--- PASS: TestMountStart/serial/RestartStopped (7.97s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-011594 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)
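
Editor's note: the Stop/RestartStopped/VerifyMountPostStop sequence above suggests the mount configuration is kept with the profile across a stop/start cycle. A hedged condensation (profile name illustrative, continuing from a mount-enabled profile as in the previous sketch):

    # hedged sketch: stop and restart the profile, then re-check the mount point
    minikube stop  -p mount-demo
    minikube start -p mount-demo
    minikube -p mount-demo ssh -- ls /minikube-host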

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (65.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-047371 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1124 08:53:29.508636  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-047371 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.728636604s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.23s)
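
Editor's note: the multi-node run above boots a control plane plus one worker in a single invocation and then reports per-node status. A hedged sketch with an illustrative profile name:

    # hedged sketch: two-node cluster, wait for components, then report per-node status
    minikube start -p multinode-demo --nodes=2 --wait=true --memory=3072 \
      --driver=docker --container-runtime=containerd
    minikube -p multinode-demo status --alsologtostderr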

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-047371 -- rollout status deployment/busybox: (3.819413885s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- exec busybox-7b57f96db7-4q4zj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- exec busybox-7b57f96db7-vzmlc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- exec busybox-7b57f96db7-4q4zj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- exec busybox-7b57f96db7-vzmlc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- exec busybox-7b57f96db7-4q4zj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- exec busybox-7b57f96db7-vzmlc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.42s)
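
Editor's note: the deployment check above rolls out a busybox Deployment from the suite's testdata and exercises cluster DNS from a pod on each node. A hedged sketch, assuming a Deployment named `busybox` has already been applied to the illustrative `multinode-demo` profile:

    # hedged sketch: wait for the rollout, then resolve cluster DNS from one of the pods
    minikube kubectl -p multinode-demo -- rollout status deployment/busybox
    POD="$(minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[0].metadata.name}')"
    minikube kubectl -p multinode-demo -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local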

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- exec busybox-7b57f96db7-4q4zj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- exec busybox-7b57f96db7-4q4zj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- exec busybox-7b57f96db7-vzmlc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-047371 -- exec busybox-7b57f96db7-vzmlc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
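
Editor's note: the host-reachability check resolves `host.minikube.internal` inside a pod and pings the address it maps to (192.168.67.1 is specific to this run's Docker network). A hedged sketch using the same busybox-flavoured extraction as the test; the profile name is illustrative.

    # hedged sketch: confirm a pod can reach the host through the host.minikube.internal alias
    POD="$(minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[0].metadata.name}')"
    HOST_IP="$(minikube kubectl -p multinode-demo -- exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")"
    minikube kubectl -p multinode-demo -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"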

                                                
                                    
x
+
TestMultiNode/serial/AddNode (25.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-047371 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-047371 -v=5 --alsologtostderr: (25.287635632s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.94s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-047371 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 cp testdata/cp-test.txt multinode-047371:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 cp multinode-047371:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2088117217/001/cp-test_multinode-047371.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 cp multinode-047371:/home/docker/cp-test.txt multinode-047371-m02:/home/docker/cp-test_multinode-047371_multinode-047371-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371-m02 "sudo cat /home/docker/cp-test_multinode-047371_multinode-047371-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 cp multinode-047371:/home/docker/cp-test.txt multinode-047371-m03:/home/docker/cp-test_multinode-047371_multinode-047371-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371-m03 "sudo cat /home/docker/cp-test_multinode-047371_multinode-047371-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 cp testdata/cp-test.txt multinode-047371-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 cp multinode-047371-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2088117217/001/cp-test_multinode-047371-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 cp multinode-047371-m02:/home/docker/cp-test.txt multinode-047371:/home/docker/cp-test_multinode-047371-m02_multinode-047371.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371 "sudo cat /home/docker/cp-test_multinode-047371-m02_multinode-047371.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 cp multinode-047371-m02:/home/docker/cp-test.txt multinode-047371-m03:/home/docker/cp-test_multinode-047371-m02_multinode-047371-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371-m03 "sudo cat /home/docker/cp-test_multinode-047371-m02_multinode-047371-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 cp testdata/cp-test.txt multinode-047371-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 cp multinode-047371-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2088117217/001/cp-test_multinode-047371-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 cp multinode-047371-m03:/home/docker/cp-test.txt multinode-047371:/home/docker/cp-test_multinode-047371-m03_multinode-047371.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371 "sudo cat /home/docker/cp-test_multinode-047371-m03_multinode-047371.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 cp multinode-047371-m03:/home/docker/cp-test.txt multinode-047371-m02:/home/docker/cp-test_multinode-047371-m03_multinode-047371-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 ssh -n multinode-047371-m02 "sudo cat /home/docker/cp-test_multinode-047371-m03_multinode-047371-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.03s)
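
Editor's note: the copy matrix above exercises `minikube cp` in every direction between the host and the nodes, each time reading the file back over `ssh -n <node>`. A hedged one-round sketch; profile and node names are illustrative (minikube names secondary nodes `<profile>-m02`, `<profile>-m03`, as seen above).

    # hedged sketch: host -> node copy, then read the file back from that node
    echo "hello from the host" > /tmp/cp-test.txt
    minikube -p multinode-demo cp /tmp/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"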

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-047371 node stop m03: (1.270821844s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-047371 status: exit status 7 (507.182441ms)

                                                
                                                
-- stdout --
	multinode-047371
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-047371-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-047371-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-047371 status --alsologtostderr: exit status 7 (509.061613ms)

                                                
                                                
-- stdout --
	multinode-047371
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-047371-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-047371-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:54:28.724359  610420 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:54:28.724535  610420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:54:28.724547  610420 out.go:374] Setting ErrFile to fd 2...
	I1124 08:54:28.724553  610420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:54:28.724793  610420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 08:54:28.724966  610420 out.go:368] Setting JSON to false
	I1124 08:54:28.724996  610420 mustload.go:66] Loading cluster: multinode-047371
	I1124 08:54:28.725127  610420 notify.go:221] Checking for updates...
	I1124 08:54:28.725395  610420 config.go:182] Loaded profile config "multinode-047371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 08:54:28.725420  610420 status.go:174] checking status of multinode-047371 ...
	I1124 08:54:28.725912  610420 cli_runner.go:164] Run: docker container inspect multinode-047371 --format={{.State.Status}}
	I1124 08:54:28.746370  610420 status.go:371] multinode-047371 host status = "Running" (err=<nil>)
	I1124 08:54:28.746409  610420 host.go:66] Checking if "multinode-047371" exists ...
	I1124 08:54:28.746740  610420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-047371
	I1124 08:54:28.764125  610420 host.go:66] Checking if "multinode-047371" exists ...
	I1124 08:54:28.764450  610420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 08:54:28.764524  610420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-047371
	I1124 08:54:28.783256  610420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/multinode-047371/id_rsa Username:docker}
	I1124 08:54:28.882098  610420 ssh_runner.go:195] Run: systemctl --version
	I1124 08:54:28.888601  610420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 08:54:28.900692  610420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 08:54:28.960645  610420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-24 08:54:28.950838533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 08:54:28.961356  610420 kubeconfig.go:125] found "multinode-047371" server: "https://192.168.67.2:8443"
	I1124 08:54:28.961392  610420 api_server.go:166] Checking apiserver status ...
	I1124 08:54:28.961446  610420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 08:54:28.973519  610420 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1336/cgroup
	W1124 08:54:28.981892  610420 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1336/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 08:54:28.981941  610420 ssh_runner.go:195] Run: ls
	I1124 08:54:28.985710  610420 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1124 08:54:28.989837  610420 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1124 08:54:28.989858  610420 status.go:463] multinode-047371 apiserver status = Running (err=<nil>)
	I1124 08:54:28.989867  610420 status.go:176] multinode-047371 status: &{Name:multinode-047371 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 08:54:28.989885  610420 status.go:174] checking status of multinode-047371-m02 ...
	I1124 08:54:28.990123  610420 cli_runner.go:164] Run: docker container inspect multinode-047371-m02 --format={{.State.Status}}
	I1124 08:54:29.007213  610420 status.go:371] multinode-047371-m02 host status = "Running" (err=<nil>)
	I1124 08:54:29.007240  610420 host.go:66] Checking if "multinode-047371-m02" exists ...
	I1124 08:54:29.007536  610420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-047371-m02
	I1124 08:54:29.024432  610420 host.go:66] Checking if "multinode-047371-m02" exists ...
	I1124 08:54:29.024701  610420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 08:54:29.024751  610420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-047371-m02
	I1124 08:54:29.042486  610420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/21978-435860/.minikube/machines/multinode-047371-m02/id_rsa Username:docker}
	I1124 08:54:29.141811  610420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 08:54:29.154054  610420 status.go:176] multinode-047371-m02 status: &{Name:multinode-047371-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 08:54:29.154087  610420 status.go:174] checking status of multinode-047371-m03 ...
	I1124 08:54:29.154380  610420 cli_runner.go:164] Run: docker container inspect multinode-047371-m03 --format={{.State.Status}}
	I1124 08:54:29.172689  610420 status.go:371] multinode-047371-m03 host status = "Stopped" (err=<nil>)
	I1124 08:54:29.172711  610420 status.go:384] host is not running, skipping remaining checks
	I1124 08:54:29.172718  610420 status.go:176] multinode-047371-m03 status: &{Name:multinode-047371-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
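
Editor's note: as the output above shows, stopping a single node makes `minikube status` exit non-zero (7 in this run) while still printing per-node state. A hedged sketch, assuming the illustrative profile has a node named m03 as in the test:

    # hedged sketch: stop one node; status still prints details but exits non-zero
    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status || echo "status exit code: $? (7 here when a node is stopped)"
    minikube -p multinode-demo node start m03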

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (7.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-047371 node start m03 -v=5 --alsologtostderr: (6.504467147s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.23s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-047371
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-047371
E1124 08:54:55.259572  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-047371: (25.03517889s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-047371 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-047371 --wait=true -v=5 --alsologtostderr: (52.829159112s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-047371
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.00s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-047371 node delete m03: (4.663745726s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-047371 stop: (23.82470458s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-047371 status: exit status 7 (100.062952ms)

                                                
                                                
-- stdout --
	multinode-047371
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-047371-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-047371 status --alsologtostderr: exit status 7 (99.658322ms)

                                                
                                                
-- stdout --
	multinode-047371
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-047371-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 08:56:23.656163  620269 out.go:360] Setting OutFile to fd 1 ...
	I1124 08:56:23.656275  620269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:56:23.656284  620269 out.go:374] Setting ErrFile to fd 2...
	I1124 08:56:23.656287  620269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 08:56:23.656513  620269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 08:56:23.656681  620269 out.go:368] Setting JSON to false
	I1124 08:56:23.656705  620269 mustload.go:66] Loading cluster: multinode-047371
	I1124 08:56:23.656839  620269 notify.go:221] Checking for updates...
	I1124 08:56:23.657127  620269 config.go:182] Loaded profile config "multinode-047371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 08:56:23.657150  620269 status.go:174] checking status of multinode-047371 ...
	I1124 08:56:23.658134  620269 cli_runner.go:164] Run: docker container inspect multinode-047371 --format={{.State.Status}}
	I1124 08:56:23.676286  620269 status.go:371] multinode-047371 host status = "Stopped" (err=<nil>)
	I1124 08:56:23.676327  620269 status.go:384] host is not running, skipping remaining checks
	I1124 08:56:23.676342  620269 status.go:176] multinode-047371 status: &{Name:multinode-047371 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 08:56:23.676390  620269 status.go:174] checking status of multinode-047371-m02 ...
	I1124 08:56:23.676708  620269 cli_runner.go:164] Run: docker container inspect multinode-047371-m02 --format={{.State.Status}}
	I1124 08:56:23.694846  620269 status.go:371] multinode-047371-m02 host status = "Stopped" (err=<nil>)
	I1124 08:56:23.694885  620269 status.go:384] host is not running, skipping remaining checks
	I1124 08:56:23.694897  620269 status.go:176] multinode-047371-m02 status: &{Name:multinode-047371-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (49.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-047371 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1124 08:57:06.443770  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-047371 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.070346392s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-047371 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.70s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (24.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-047371
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-047371-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-047371-m02 --driver=docker  --container-runtime=containerd: exit status 14 (73.451919ms)

                                                
                                                
-- stdout --
	* [multinode-047371-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-047371-m02' is duplicated with machine name 'multinode-047371-m02' in profile 'multinode-047371'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-047371-m03 --driver=docker  --container-runtime=containerd
E1124 08:57:26.781413  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-047371-m03 --driver=docker  --container-runtime=containerd: (21.810613425s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-047371
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-047371: exit status 80 (304.105151ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-047371 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-047371-m03 already exists in multinode-047371-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-047371-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-047371-m03: (1.986344297s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.23s)

                                                
                                    
x
+
TestPreload (119.83s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-338915 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-338915 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (46.749047471s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-338915 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-338915 image pull gcr.io/k8s-minikube/busybox: (2.564169425s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-338915
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-338915: (5.698061507s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-338915 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1124 08:58:49.845393  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-338915 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m2.122857388s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-338915 image list
helpers_test.go:175: Cleaning up "test-preload-338915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-338915
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-338915: (2.462735476s)
--- PASS: TestPreload (119.83s)
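
Editor's note: the preload test above starts a cluster with `--preload=false`, side-loads an image, stops, restarts with preloads enabled, and checks the image is still listed. A hedged condensation with an illustrative profile name; the image is the one used in the log.

    # hedged sketch: an image pulled into containerd should survive the stop/start cycle
    minikube start -p preload-demo --memory=3072 --preload=false \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.0
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop  -p preload-demo
    minikube start -p preload-demo --memory=3072 --driver=docker --container-runtime=containerd
    minikube -p preload-demo image list | grep busybox   # side-loaded image should still be present
    minikube delete -p preload-demo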

                                                
                                    
x
+
TestScheduledStopUnix (97.76s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-392093 --memory=3072 --driver=docker  --container-runtime=containerd
E1124 08:59:55.259852  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-392093 --memory=3072 --driver=docker  --container-runtime=containerd: (20.642278756s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-392093 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 09:00:02.391572  638676 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:00:02.391842  638676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:00:02.391852  638676 out.go:374] Setting ErrFile to fd 2...
	I1124 09:00:02.391855  638676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:00:02.392054  638676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 09:00:02.392295  638676 out.go:368] Setting JSON to false
	I1124 09:00:02.392381  638676 mustload.go:66] Loading cluster: scheduled-stop-392093
	I1124 09:00:02.392756  638676 config.go:182] Loaded profile config "scheduled-stop-392093": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 09:00:02.392844  638676 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/config.json ...
	I1124 09:00:02.393059  638676 mustload.go:66] Loading cluster: scheduled-stop-392093
	I1124 09:00:02.393203  638676 config.go:182] Loaded profile config "scheduled-stop-392093": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-392093 -n scheduled-stop-392093
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-392093 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 09:00:02.791291  638826 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:00:02.791383  638826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:00:02.791388  638826 out.go:374] Setting ErrFile to fd 2...
	I1124 09:00:02.791393  638826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:00:02.791609  638826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 09:00:02.791859  638826 out.go:368] Setting JSON to false
	I1124 09:00:02.792045  638826 daemonize_unix.go:73] killing process 638710 as it is an old scheduled stop
	I1124 09:00:02.792152  638826 mustload.go:66] Loading cluster: scheduled-stop-392093
	I1124 09:00:02.792498  638826 config.go:182] Loaded profile config "scheduled-stop-392093": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 09:00:02.792561  638826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/config.json ...
	I1124 09:00:02.792731  638826 mustload.go:66] Loading cluster: scheduled-stop-392093
	I1124 09:00:02.792822  638826 config.go:182] Loaded profile config "scheduled-stop-392093": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 09:00:02.799402  439524 retry.go:31] will retry after 84.549µs: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.800533  439524 retry.go:31] will retry after 223.253µs: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.801665  439524 retry.go:31] will retry after 124.448µs: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.802808  439524 retry.go:31] will retry after 215.717µs: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.803939  439524 retry.go:31] will retry after 331.015µs: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.805079  439524 retry.go:31] will retry after 475.497µs: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.806204  439524 retry.go:31] will retry after 826.617µs: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.807339  439524 retry.go:31] will retry after 1.625929ms: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.809536  439524 retry.go:31] will retry after 2.955273ms: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.812735  439524 retry.go:31] will retry after 3.685013ms: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.816937  439524 retry.go:31] will retry after 5.356213ms: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.823202  439524 retry.go:31] will retry after 8.140456ms: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.832476  439524 retry.go:31] will retry after 18.337912ms: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.851719  439524 retry.go:31] will retry after 22.089141ms: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.873914  439524 retry.go:31] will retry after 18.690132ms: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
I1124 09:00:02.893165  439524 retry.go:31] will retry after 22.431055ms: open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-392093 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-392093 -n scheduled-stop-392093
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-392093
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-392093 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 09:00:28.715088  639718 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:00:28.715242  639718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:00:28.715253  639718 out.go:374] Setting ErrFile to fd 2...
	I1124 09:00:28.715260  639718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:00:28.715525  639718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 09:00:28.715814  639718 out.go:368] Setting JSON to false
	I1124 09:00:28.715914  639718 mustload.go:66] Loading cluster: scheduled-stop-392093
	I1124 09:00:28.716282  639718 config.go:182] Loaded profile config "scheduled-stop-392093": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 09:00:28.716369  639718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/scheduled-stop-392093/config.json ...
	I1124 09:00:28.716591  639718 mustload.go:66] Loading cluster: scheduled-stop-392093
	I1124 09:00:28.716714  639718 config.go:182] Loaded profile config "scheduled-stop-392093": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-392093
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-392093: exit status 7 (80.621457ms)

                                                
                                                
-- stdout --
	scheduled-stop-392093
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-392093 -n scheduled-stop-392093
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-392093 -n scheduled-stop-392093: exit status 7 (79.944068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-392093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-392093
E1124 09:01:18.321880  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-392093: (5.558969679s)
--- PASS: TestScheduledStopUnix (97.76s)
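
Editor's note: the flow above schedules a stop, replaces it, cancels it, then lets a short schedule fire and confirms the host reaches Stopped (at which point `status` exits 7). A hedged sketch with an illustrative profile name and a generous wait:

    # hedged sketch: schedule, cancel, then actually let a scheduled stop fire
    minikube stop -p sched-demo --schedule 5m        # schedule a stop 5 minutes out
    minikube stop -p sched-demo --cancel-scheduled   # cancel all pending scheduled stops
    minikube stop -p sched-demo --schedule 15s       # schedule a short one and let it run
    sleep 30
    minikube status --format='{{.Host}}' -p sched-demo   # prints "Stopped"; exits 7 once stopped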

                                                
                                    
x
+
TestInsufficientStorage (11.66s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-681681 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-681681 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.125972132s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"45725bf5-8761-43d3-9c5d-d5db14dacba2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-681681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a30d8ead-097d-4fe0-87f9-aab9eb277d12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21978"}}
	{"specversion":"1.0","id":"ba3afcf3-c01c-4d7a-aa30-5d038d50c020","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b5d88980-297a-4b78-a1c4-09c279f4fbf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig"}}
	{"specversion":"1.0","id":"e2170f55-ecf5-4ac6-8077-1c75487c09d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube"}}
	{"specversion":"1.0","id":"a64a18b7-ee5c-47e0-9392-41e766d1ecdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9545253b-9f59-4de8-8532-9688396acc49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1d93b57f-85f3-4a8d-a3fc-724745798625","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6f0771e0-f11f-456c-8030-1d73e9b76fb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cbe02b43-8759-460c-aa60-750d55b4c486","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"10b9a343-55d4-4b11-abea-14e7b00c43a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a058cf94-d916-4496-bf40-839d1dda791a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-681681\" primary control-plane node in \"insufficient-storage-681681\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4209199c-1c31-49b0-9af1-fb71c54d13e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"93949a2f-0dc3-46a6-9715-aec934bccf2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a8c9e648-fafb-4b22-b93c-8966c0119a86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-681681 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-681681 --output=json --layout=cluster: exit status 7 (307.225717ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-681681","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-681681","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 09:01:28.869361  642023 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-681681" does not appear in /home/jenkins/minikube-integration/21978-435860/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-681681 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-681681 --output=json --layout=cluster: exit status 7 (305.533512ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-681681","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-681681","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 09:01:29.175326  642149 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-681681" does not appear in /home/jenkins/minikube-integration/21978-435860/kubeconfig
	E1124 09:01:29.186514  642149 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/insufficient-storage-681681/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-681681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-681681
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-681681: (1.921503396s)
--- PASS: TestInsufficientStorage (11.66s)
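Note: the RSRC_DOCKER_STORAGE advice quoted above reduces to a short cleanup sequence. A minimal sketch, assuming the host Docker daemon and the profile name from this run (the in-node prune only applies when the Docker container runtime is used; this job runs containerd, and `df -h /var` is an added sanity check, not part of the advice):
	docker system prune -a                                                # remove unused Docker data on the host
	minikube ssh -p insufficient-storage-681681 -- docker system prune   # prune inside the node (Docker runtime only)
	df -h /var                                                            # confirm /var is no longer at 100% of capacity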

                                                
                                    
TestRunningBinaryUpgrade (55.86s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3717026638 start -p running-upgrade-180227 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3717026638 start -p running-upgrade-180227 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (28.130001002s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-180227 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-180227 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (22.270534613s)
helpers_test.go:175: Cleaning up "running-upgrade-180227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-180227
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-180227: (2.389697793s)
--- PASS: TestRunningBinaryUpgrade (55.86s)

                                                
                                    
TestKubernetesUpgrade (306.45s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-521313 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-521313 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (25.020226196s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-521313
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-521313: (1.296517567s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-521313 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-521313 status --format={{.Host}}: exit status 7 (88.719939ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-521313 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-521313 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m30.581068999s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-521313 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-521313 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-521313 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (91.462977ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-521313] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-521313
	    minikube start -p kubernetes-upgrade-521313 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5213132 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-521313 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-521313 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-521313 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.416093063s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-521313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-521313
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-521313: (2.893078657s)
--- PASS: TestKubernetesUpgrade (306.45s)
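In outline, the upgrade path this test exercises (flags abbreviated; the full commands are logged above) is:
	minikube start -p kubernetes-upgrade-521313 --kubernetes-version=v1.28.0 ...
	minikube stop -p kubernetes-upgrade-521313
	minikube start -p kubernetes-upgrade-521313 --kubernetes-version=v1.35.0-beta.0 ...
	minikube start -p kubernetes-upgrade-521313 --kubernetes-version=v1.28.0 ...          # rejected with K8S_DOWNGRADE_UNSUPPORTED (exit status 106)
	minikube start -p kubernetes-upgrade-521313 --kubernetes-version=v1.35.0-beta.0 ...   # restart at the newer version succeeds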

                                                
                                    
TestMissingContainerUpgrade (94.87s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2314433213 start -p missing-upgrade-058813 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2314433213 start -p missing-upgrade-058813 --memory=3072 --driver=docker  --container-runtime=containerd: (21.530870087s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-058813
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-058813
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-058813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-058813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m7.154529098s)
helpers_test.go:175: Cleaning up "missing-upgrade-058813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-058813
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-058813: (2.398946456s)
--- PASS: TestMissingContainerUpgrade (94.87s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.24s)

                                                
                                    
TestPause/serial/Start (53.48s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-145027 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-145027 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (53.476184042s)
--- PASS: TestPause/serial/Start (53.48s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (110.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1065253849 start -p stopped-upgrade-187531 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1065253849 start -p stopped-upgrade-187531 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m22.610987293s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1065253849 -p stopped-upgrade-187531 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1065253849 -p stopped-upgrade-187531 stop: (1.271699514s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-187531 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-187531 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.623146009s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (110.51s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (10.2s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-145027 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1124 09:02:26.781703  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-145027 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (10.185939017s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (10.20s)

                                                
                                    
TestPause/serial/Pause (1.86s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-145027 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-145027 --alsologtostderr -v=5: (1.859305122s)
--- PASS: TestPause/serial/Pause (1.86s)

                                                
                                    
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-145027 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-145027 --output=json --layout=cluster: exit status 2 (441.425868ms)

                                                
                                                
-- stdout --
	{"Name":"pause-145027","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-145027","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)

                                                
                                    
TestPause/serial/Unpause (1.06s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-145027 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-145027 --alsologtostderr -v=5: (1.062689763s)
--- PASS: TestPause/serial/Unpause (1.06s)

                                                
                                    
TestPause/serial/PauseAgain (0.87s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-145027 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

                                                
                                    
TestPause/serial/DeletePaused (2.91s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-145027 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-145027 --alsologtostderr -v=5: (2.906143426s)
--- PASS: TestPause/serial/DeletePaused (2.91s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.91s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-145027
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-145027: exit status 1 (23.529637ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-145027: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.91s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-187531
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-187531: (1.191892818s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-447421 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-447421 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (86.146461ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-447421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (25.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-447421 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-447421 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (24.915442117s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-447421 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.24s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (22.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-447421 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-447421 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (20.073761428s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-447421 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-447421 status -o json: exit status 2 (299.574075ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-447421","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-447421
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-447421: (2.003361606s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.38s)

                                                
                                    
TestNoKubernetes/serial/Start (6.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-447421 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-447421 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (6.899565451s)
--- PASS: TestNoKubernetes/serial/Start (6.90s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21978-435860/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-447421 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-447421 "sudo systemctl is-active --quiet service kubelet": exit status 1 (325.147519ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.96s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-447421
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-447421: (2.144942767s)
--- PASS: TestNoKubernetes/serial/Stop (2.15s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-447421 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-447421 --driver=docker  --container-runtime=containerd: (6.525859783s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.53s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-447421 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-447421 "sudo systemctl is-active --quiet service kubelet": exit status 1 (282.197132ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)
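Note: both VerifyK8sNotRunning checks above are expected to exit non-zero while Kubernetes is disabled. A manual equivalent, using the profile name from this run (the exit-code meaning follows the usual systemd convention, where `systemctl is-active` exits 3 for an inactive unit):
	minikube ssh -p NoKubernetes-447421 "sudo systemctl is-active kubelet"   # prints "inactive" and typically exits 3, surfaced above as "ssh: Process exited with status 3"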

                                                
                                    
TestNetworkPlugins/group/false (3.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-203355 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-203355 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (170.677524ms)

                                                
                                                
-- stdout --
	* [false-203355] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21978
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 09:04:39.049082  693641 out.go:360] Setting OutFile to fd 1 ...
	I1124 09:04:39.049385  693641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:04:39.049397  693641 out.go:374] Setting ErrFile to fd 2...
	I1124 09:04:39.049401  693641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 09:04:39.049640  693641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21978-435860/.minikube/bin
	I1124 09:04:39.050269  693641 out.go:368] Setting JSON to false
	I1124 09:04:39.051813  693641 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13615,"bootTime":1763961464,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 09:04:39.051936  693641 start.go:143] virtualization: kvm guest
	I1124 09:04:39.053754  693641 out.go:179] * [false-203355] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 09:04:39.054988  693641 out.go:179]   - MINIKUBE_LOCATION=21978
	I1124 09:04:39.055005  693641 notify.go:221] Checking for updates...
	I1124 09:04:39.056748  693641 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 09:04:39.057748  693641 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21978-435860/kubeconfig
	I1124 09:04:39.058758  693641 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21978-435860/.minikube
	I1124 09:04:39.059702  693641 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 09:04:39.060725  693641 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 09:04:39.062290  693641 config.go:182] Loaded profile config "cert-expiration-869306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1124 09:04:39.062445  693641 config.go:182] Loaded profile config "kubernetes-upgrade-521313": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1124 09:04:39.062615  693641 config.go:182] Loaded profile config "missing-upgrade-058813": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1124 09:04:39.062755  693641 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 09:04:39.087639  693641 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 09:04:39.087741  693641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 09:04:39.150202  693641 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-24 09:04:39.140449958 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 09:04:39.150315  693641 docker.go:319] overlay module found
	I1124 09:04:39.151831  693641 out.go:179] * Using the docker driver based on user configuration
	I1124 09:04:39.152914  693641 start.go:309] selected driver: docker
	I1124 09:04:39.152930  693641 start.go:927] validating driver "docker" against <nil>
	I1124 09:04:39.152941  693641 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 09:04:39.154496  693641 out.go:203] 
	W1124 09:04:39.155454  693641 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1124 09:04:39.157042  693641 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-203355 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-203355

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-203355

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-203355

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-203355

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-203355

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-203355

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-203355

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-203355

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-203355

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-203355

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-203355

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-203355" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-203355" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:02:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-869306
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:04:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-521313
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:03:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-058813
contexts:
- context:
    cluster: cert-expiration-869306
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:02:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-869306
  name: cert-expiration-869306
- context:
    cluster: kubernetes-upgrade-521313
    user: kubernetes-upgrade-521313
  name: kubernetes-upgrade-521313
- context:
    cluster: missing-upgrade-058813
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:03:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-058813
  name: missing-upgrade-058813
current-context: kubernetes-upgrade-521313
kind: Config
users:
- name: cert-expiration-869306
  user:
    client-certificate: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/cert-expiration-869306/client.crt
    client-key: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/cert-expiration-869306/client.key
- name: kubernetes-upgrade-521313
  user:
    client-certificate: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/kubernetes-upgrade-521313/client.crt
    client-key: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/kubernetes-upgrade-521313/client.key
- name: missing-upgrade-058813
  user:
    client-certificate: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/missing-upgrade-058813/client.crt
    client-key: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/missing-upgrade-058813/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-203355

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: docker system info:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: cri-docker daemon status:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: cri-docker daemon config:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: cri-dockerd version:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: containerd daemon status:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: containerd daemon config:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: /etc/containerd/config.toml:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: containerd config dump:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: crio daemon status:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: crio daemon config:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: /etc/crio:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

>>> host: crio config:
* Profile "false-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-203355"

----------------------- debugLogs end: false-203355 [took: 3.3382752s] --------------------------------
helpers_test.go:175: Cleaning up "false-203355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-203355
--- PASS: TestNetworkPlugins/group/false (3.67s)

TestStartStop/group/old-k8s-version/serial/FirstStart (47.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-128377 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-128377 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (47.728865382s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (47.73s)

TestStartStop/group/no-preload/serial/FirstStart (48.51s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-820576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1124 09:04:55.255049  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-749436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-820576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (48.513513951s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (48.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-128377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-128377 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-128377 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-128377 --alsologtostderr -v=3: (12.089234415s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-820576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-820576 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/no-preload/serial/Stop (12.04s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-820576 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-820576 --alsologtostderr -v=3: (12.040510504s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.04s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-128377 -n old-k8s-version-128377
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-128377 -n old-k8s-version-128377: exit status 7 (95.821791ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-128377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (51.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-128377 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-128377 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.64599037s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-128377 -n old-k8s-version-128377
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.97s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820576 -n no-preload-820576
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820576 -n no-preload-820576: exit status 7 (92.757901ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-820576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (48.67s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-820576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-820576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (48.298026449s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820576 -n no-preload-820576
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.67s)

TestStartStop/group/embed-certs/serial/FirstStart (46.61s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-841285 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-841285 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (46.604955487s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.61s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-qxspc" [f8639486-7c49-445d-8c07-2ad93084bd35] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003690227s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mlv6n" [a5228eb0-b749-47af-955c-fb4739f8e440] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004129222s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-qxspc" [f8639486-7c49-445d-8c07-2ad93084bd35] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00397711s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-820576 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mlv6n" [a5228eb0-b749-47af-955c-fb4739f8e440] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004256229s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-128377 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-820576 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1124 09:07:02.713178  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 09:07:03.076916  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.26s)

TestStartStop/group/no-preload/serial/Pause (3.35s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-820576 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-820576 -n no-preload-820576
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-820576 -n no-preload-820576: exit status 2 (394.587002ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-820576 -n no-preload-820576
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-820576 -n no-preload-820576: exit status 2 (374.881607ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-820576 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-820576 -n no-preload-820576
E1124 09:07:06.443526  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/addons-598179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-820576 -n no-preload-820576
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.35s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-128377 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-128377 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-128377 -n old-k8s-version-128377
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-128377 -n old-k8s-version-128377: exit status 2 (381.930919ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-128377 -n old-k8s-version-128377
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-128377 -n old-k8s-version-128377: exit status 2 (364.928564ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-128377 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-128377 -n old-k8s-version-128377
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-128377 -n old-k8s-version-128377
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.45s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-841285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-841285 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/embed-certs/serial/Stop (12.43s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-841285 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-841285 --alsologtostderr -v=3: (12.433156039s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.43s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-603918 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-603918 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (41.630140599s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.63s)

TestStartStop/group/newest-cni/serial/FirstStart (34.72s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-654569 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-654569 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (34.722465361s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.72s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-841285 -n embed-certs-841285
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-841285 -n embed-certs-841285: exit status 7 (112.924042ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-841285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/embed-certs/serial/SecondStart (48.24s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-841285 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1124 09:07:26.781714  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/functional-850845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-841285 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (47.84760673s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-841285 -n embed-certs-841285
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.24s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-654569 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/newest-cni/serial/Stop (1.43s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-654569 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-654569 --alsologtostderr -v=3: (1.432849486s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.43s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-654569 -n newest-cni-654569
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-654569 -n newest-cni-654569: exit status 7 (83.503829ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-654569 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (11.84s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-654569 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-654569 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (11.481610434s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-654569 -n newest-cni-654569
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.84s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.41s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-654569 image list --format=json
I1124 09:08:00.124272  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 09:08:00.488156  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
I1124 09:08:00.939636  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.41s)

TestStartStop/group/newest-cni/serial/Pause (2.74s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-654569 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-654569 -n newest-cni-654569
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-654569 -n newest-cni-654569: exit status 2 (318.058297ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-654569 -n newest-cni-654569
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-654569 -n newest-cni-654569: exit status 2 (318.818664ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-654569 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-654569 -n newest-cni-654569
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-654569 -n newest-cni-654569
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.74s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-603918 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-603918 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

TestNetworkPlugins/group/auto/Start (42.39s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (42.393820676s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.39s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-603918 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-603918 --alsologtostderr -v=3: (12.27015695s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.27s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4wwrm" [334a7da1-163e-40d4-bf7a-15d046b9ccc0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00380587s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4wwrm" [334a7da1-163e-40d4-bf7a-15d046b9ccc0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003208438s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-841285 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918: exit status 7 (97.655411ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-603918 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-603918 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-603918 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (57.640519933s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.99s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-841285 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1124 09:08:21.560859  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1124 09:08:21.871306  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1124 09:08:22.186793  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (1.19s)

TestStartStop/group/embed-certs/serial/Pause (3.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-841285 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-841285 -n embed-certs-841285
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-841285 -n embed-certs-841285: exit status 2 (377.314871ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-841285 -n embed-certs-841285
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-841285 -n embed-certs-841285: exit status 2 (358.583465ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-841285 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-841285 -n embed-certs-841285
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-841285 -n embed-certs-841285
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.21s)

TestNetworkPlugins/group/kindnet/Start (48.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (48.143810249s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.14s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-203355 "pgrep -a kubelet"
I1124 09:08:49.550642  439524 config.go:182] Loaded profile config "auto-203355": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-203355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2k47w" [303a5591-ab90-4c69-98d3-2c0f03eec9d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2k47w" [303a5591-ab90-4c69-98d3-2c0f03eec9d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004257843s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)

TestNetworkPlugins/group/calico/Start (53.08s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (53.08240758s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.08s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-203355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-894k2" [5a536fcb-8f7b-421b-a960-270be2c77deb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003290493s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zw6m2" [50ed7bdd-6e57-4b9d-8f03-8b359189f9d2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003304461s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestNetworkPlugins/group/custom-flannel/Start (63.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m3.443978678s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.44s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-203355 "pgrep -a kubelet"
I1124 09:09:23.194088  439524 config.go:182] Loaded profile config "kindnet-203355": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-203355 replace --force -f testdata/netcat-deployment.yaml
I1124 09:09:23.997138  439524 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5zgpb" [9df10639-e4b3-4198-91af-25ef00662dd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5zgpb" [9df10639-e4b3-4198-91af-25ef00662dd2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.032147393s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zw6m2" [50ed7bdd-6e57-4b9d-8f03-8b359189f9d2] Running
I1124 09:09:24.495117  439524 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003898679s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-603918 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-603918 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
I1124 09:09:29.528778  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1124 09:09:29.884188  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1124 09:09:30.217226  439524 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-603918 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918: exit status 2 (391.745984ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918: exit status 2 (384.289708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-603918 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-603918 -n default-k8s-diff-port-603918
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.45s)
E1124 09:10:54.935219  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-203355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (66.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m6.379611571s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-4gc6m" [575f8857-b04c-4857-aed3-e7a5bd05a2cb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-4gc6m" [575f8857-b04c-4857-aed3-e7a5bd05a2cb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004860478s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-203355 "pgrep -a kubelet"
I1124 09:09:49.275246  439524 config.go:182] Loaded profile config "calico-203355": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-203355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xxvwr" [36f5ddbe-b9ae-4902-b3b1-42c36863ddfa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xxvwr" [36f5ddbe-b9ae-4902-b3b1-42c36863ddfa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004157048s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (54.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.172592078s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-203355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (38.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-203355 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (38.781413738s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-203355 "pgrep -a kubelet"
I1124 09:10:23.161391  439524 config.go:182] Loaded profile config "custom-flannel-203355": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-203355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hmnrn" [2bb33490-b29c-410f-89f1-982652f0f436] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hmnrn" [2bb33490-b29c-410f-89f1-982652f0f436] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004884091s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-203355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-203355 "pgrep -a kubelet"
I1124 09:10:43.715628  439524 config.go:182] Loaded profile config "enable-default-cni-203355": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-203355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6xxx9" [56da348b-3dae-4a98-bb03-487737bace7f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 09:10:44.693011  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/old-k8s-version-128377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 09:10:46.529298  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-6xxx9" [56da348b-3dae-4a98-bb03-487737bace7f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003672677s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-jgvr2" [0ec53a45-3b82-4887-8891-17c3cd062d56] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00431471s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-203355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-203355 "pgrep -a kubelet"
I1124 09:10:56.255438  439524 config.go:182] Loaded profile config "flannel-203355": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-203355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sk9lg" [33bd5546-b78e-4c7e-98a5-d96bef794af6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 09:10:56.771444  439524 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/no-preload-820576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-sk9lg" [33bd5546-b78e-4c7e-98a5-d96bef794af6] Running
I1124 09:10:59.537126  439524 config.go:182] Loaded profile config "bridge-203355": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.002559281s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-203355 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-203355 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2vpdg" [90d59ec1-4eb9-47b5-9a25-1faf19c0aad0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2vpdg" [90d59ec1-4eb9-47b5-9a25-1faf19c0aad0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004534892s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-203355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-203355 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-203355 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (32/420)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.17
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
152 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
153 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
378 TestStartStop/group/disable-driver-mounts 0.19
393 TestNetworkPlugins/group/kubenet 3.42
401 TestNetworkPlugins/group/cilium 3.9
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1124 08:29:50.722343  439524 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
W1124 08:29:50.863917  439524 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 status code: 404
W1124 08:29:50.896270  439524 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-598029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-598029
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-203355 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-203355

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-203355

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-203355

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-203355

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-203355

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-203355

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-203355

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-203355

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-203355

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-203355

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-203355

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-203355" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-203355" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:02:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-869306
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:04:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-521313
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:03:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-058813
contexts:
- context:
    cluster: cert-expiration-869306
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:02:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-869306
  name: cert-expiration-869306
- context:
    cluster: kubernetes-upgrade-521313
    user: kubernetes-upgrade-521313
  name: kubernetes-upgrade-521313
- context:
    cluster: missing-upgrade-058813
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:03:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-058813
  name: missing-upgrade-058813
current-context: kubernetes-upgrade-521313
kind: Config
users:
- name: cert-expiration-869306
  user:
    client-certificate: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/cert-expiration-869306/client.crt
    client-key: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/cert-expiration-869306/client.key
- name: kubernetes-upgrade-521313
  user:
    client-certificate: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/kubernetes-upgrade-521313/client.crt
    client-key: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/kubernetes-upgrade-521313/client.key
- name: missing-upgrade-058813
  user:
    client-certificate: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/missing-upgrade-058813/client.crt
    client-key: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/missing-upgrade-058813/client.key
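
Note: the kubeconfig above only contains cert-expiration-869306, kubernetes-upgrade-521313 and missing-upgrade-058813, which is why every kubenet-203355 query in this section fails. A minimal sketch for confirming that from a shell on the test host (assuming the same KUBECONFIG and the minikube binary built by this run):

    kubectl config get-contexts                          # lists only the three contexts shown above
    out/minikube-linux-amd64 profile list                # kubenet-203355 does not appear; the profile was never created
    out/minikube-linux-amd64 start -p kubenet-203355     # would create the missing profile (the skipped test never does this)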

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-203355

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-203355"

                                                
                                                
----------------------- debugLogs end: kubenet-203355 [took: 3.258789418s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-203355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-203355
--- SKIP: TestNetworkPlugins/group/kubenet (3.42s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-203355 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-203355
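
The netcat entries above are probes the debug collector would normally run inside a netcat deployment in the cluster; with no cilium-203355 context in the kubeconfig, each one fails before reaching any pod. A rough sketch of what such probes look like — the deployment name and exact flags are assumptions, not taken from this run:

    kubectl --context cilium-203355 exec deploy/netcat -- nslookup kubernetes.default
    kubectl --context cilium-203355 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
    kubectl --context cilium-203355 exec deploy/netcat -- nc -z 10.96.0.10 53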

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-203355" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:02:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-869306
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:04:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-521313
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21978-435860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:04:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-058813
contexts:
- context:
    cluster: cert-expiration-869306
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:02:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-869306
  name: cert-expiration-869306
- context:
    cluster: kubernetes-upgrade-521313
    user: kubernetes-upgrade-521313
  name: kubernetes-upgrade-521313
- context:
    cluster: missing-upgrade-058813
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 09:04:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: missing-upgrade-058813
  name: missing-upgrade-058813
current-context: missing-upgrade-058813
kind: Config
users:
- name: cert-expiration-869306
  user:
    client-certificate: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/cert-expiration-869306/client.crt
    client-key: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/cert-expiration-869306/client.key
- name: kubernetes-upgrade-521313
  user:
    client-certificate: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/kubernetes-upgrade-521313/client.crt
    client-key: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/kubernetes-upgrade-521313/client.key
- name: missing-upgrade-058813
  user:
    client-certificate: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/missing-upgrade-058813/client.crt
    client-key: /home/jenkins/minikube-integration/21978-435860/.minikube/profiles/missing-upgrade-058813/client.key
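
As with kubenet-203355 above, cilium-203355 is absent from this kubeconfig, so every kubectl call against that context fails. The errors in this section come from pointing kubectl at a context name the kubeconfig does not contain; a minimal sketch, assuming the same kubeconfig:

    kubectl --context cilium-203355 get pods             # error: context "cilium-203355" does not exist
    kubectl config get-contexts                          # confirms only the three contexts listed above exist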

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-203355

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-203355" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-203355"

                                                
                                                
----------------------- debugLogs end: cilium-203355 [took: 3.731439507s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-203355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-203355
--- SKIP: TestNetworkPlugins/group/cilium (3.90s)
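
Both network-plugin groups above were skipped by the test itself (net_test.go:102) rather than failing. To re-run one group locally against a freshly built binary, something like the following should work; the path, build tag and timeout are assumptions based on the minikube repo layout, not this report:

    go test ./test/integration -tags=integration -run 'TestNetworkPlugins/group/cilium' -timeout=30m -v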

                                                
                                    