Test Report: Docker_Linux_containerd 21847

fa4d670f7aa2bf54fac775fb3c292483f6687320:2025-11-21:42430

Failed tests (4/333)

Order  Failed test  Duration (s)
305 TestStartStop/group/old-k8s-version/serial/DeployApp 13.97
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 14.11
307 TestStartStop/group/no-preload/serial/DeployApp 13.03
348 TestStartStop/group/embed-certs/serial/DeployApp 14.3
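
All four failures are in the DeployApp step of TestStartStop. In the detailed log below (old-k8s-version), the test deploys testdata/busybox.yaml, waits for the pod to report Running, then fails an open-file-limit assertion: 'ulimit -n' inside the pod returns 1024 where 1048576 is expected. A rough way to re-run that check by hand against the same profile (assuming the busybox pod from testdata/busybox.yaml is still present in the default namespace; kubectl wait stands in here for the test helper's pod polling):

	kubectl --context old-k8s-version-012258 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-012258 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context old-k8s-version-012258 exec busybox -- /bin/sh -c "ulimit -n"
	# The test expects 1048576; this run reported 1024.
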
TestStartStop/group/old-k8s-version/serial/DeployApp (13.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-012258 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fa895e52-0bff-4604-8b62-fd0f087015e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fa895e52-0bff-4604-8b62-fd0f087015e8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004215918s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-012258 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-012258
helpers_test.go:243: (dbg) docker inspect old-k8s-version-012258:

-- stdout --
	[
	    {
	        "Id": "b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d",
	        "Created": "2025-11-21T14:29:18.305605728Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251679,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:29:18.348841908Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d/hosts",
	        "LogPath": "/var/lib/docker/containers/b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d/b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d-json.log",
	        "Name": "/old-k8s-version-012258",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-012258:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-012258",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d",
	                "LowerDir": "/var/lib/docker/overlay2/4ea3913a068d8b871d800eefdd7cdd11e4851e7b5031ea166038678d2b0108e1-init/diff:/var/lib/docker/overlay2/a649757dd9587fa5a20ca8a56ec1923099f2a5e912dc7e8e1dfa08e79248b59f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4ea3913a068d8b871d800eefdd7cdd11e4851e7b5031ea166038678d2b0108e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4ea3913a068d8b871d800eefdd7cdd11e4851e7b5031ea166038678d2b0108e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4ea3913a068d8b871d800eefdd7cdd11e4851e7b5031ea166038678d2b0108e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-012258",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-012258/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-012258",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-012258",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-012258",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "46765a8ec6da2ef06d0a63c5e792b68206b48e74aeaeb299bf506ff70e7dcffd",
	            "SandboxKey": "/var/run/docker/netns/46765a8ec6da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-012258": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ecee753316979a1bb886a50ec401a80f6274b9bc39c4a8bb1732e91064c178b9",
	                    "EndpointID": "c92e22445c114f178de1b5adf2a20b74000e44859ae25f57affa69d30eb60100",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "9e:cd:46:05:9b:55",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-012258",
	                        "b631b0b0e9d5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-012258 -n old-k8s-version-012258
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-012258 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-012258 logs -n 25: (1.212323377s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p cilium-459127 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo containerd config dump                                                                                                                                                                                                        │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ delete  │ -p cert-expiration-371956                                                                                                                                                                                                                           │ cert-expiration-371956       │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ -p cilium-459127 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo crio config                                                                                                                                                                                                                   │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ delete  │ -p cilium-459127                                                                                                                                                                                                                                    │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ start   │ -p cert-options-733993 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p force-systemd-flag-730471 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p NoKubernetes-187733 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ stop    │ -p NoKubernetes-187733                                                                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p NoKubernetes-187733 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p NoKubernetes-187733 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │                     │
	│ delete  │ -p NoKubernetes-187733                                                                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p old-k8s-version-012258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ ssh     │ cert-options-733993 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p cert-options-733993 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ delete  │ -p cert-options-733993                                                                                                                                                                                                                              │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p no-preload-921956 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ ssh     │ force-systemd-flag-730471 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ delete  │ -p force-systemd-flag-730471                                                                                                                                                                                                                        │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p default-k8s-diff-port-376255 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-376255 │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:29:24
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:29:24.877938  255774 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:29:24.878133  255774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:29:24.878179  255774 out.go:374] Setting ErrFile to fd 2...
	I1121 14:29:24.878200  255774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:29:24.879901  255774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:29:24.881344  255774 out.go:368] Setting JSON to false
	I1121 14:29:24.883254  255774 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4307,"bootTime":1763731058,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:29:24.883372  255774 start.go:143] virtualization: kvm guest
	I1121 14:29:24.885483  255774 out.go:179] * [default-k8s-diff-port-376255] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:29:24.887201  255774 notify.go:221] Checking for updates...
	I1121 14:29:24.887242  255774 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:29:24.890729  255774 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:29:24.892963  255774 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:24.894677  255774 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 14:29:24.897870  255774 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:29:24.899765  255774 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:29:24.902854  255774 config.go:182] Loaded profile config "kubernetes-upgrade-797080": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:24.903030  255774 config.go:182] Loaded profile config "no-preload-921956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:24.903162  255774 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:24.903312  255774 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:29:24.939143  255774 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:29:24.939248  255774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:29:25.025144  255774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-21 14:29:25.01035373 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:29:25.025295  255774 docker.go:319] overlay module found
	I1121 14:29:25.027378  255774 out.go:179] * Using the docker driver based on user configuration
	I1121 14:29:22.611340  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-012258
	
	I1121 14:29:22.611365  249617 ubuntu.go:182] provisioning hostname "old-k8s-version-012258"
	I1121 14:29:22.611426  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:22.635589  249617 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:22.635869  249617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1121 14:29:22.635891  249617 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-012258 && echo "old-k8s-version-012258" | sudo tee /etc/hostname
	I1121 14:29:22.796661  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-012258
	
	I1121 14:29:22.796754  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:22.822578  249617 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:22.822834  249617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1121 14:29:22.822860  249617 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-012258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-012258/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-012258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:29:22.970644  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:29:22.970676  249617 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:29:22.970732  249617 ubuntu.go:190] setting up certificates
	I1121 14:29:22.970743  249617 provision.go:84] configureAuth start
	I1121 14:29:22.970826  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:22.991118  249617 provision.go:143] copyHostCerts
	I1121 14:29:22.991183  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:29:22.991193  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:29:22.991250  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:29:22.991367  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:29:22.991381  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:29:22.991414  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:29:22.991488  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:29:22.991499  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:29:22.991526  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:29:22.991627  249617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-012258 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-012258]
	I1121 14:29:23.140756  249617 provision.go:177] copyRemoteCerts
	I1121 14:29:23.140833  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:29:23.140885  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.161751  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.269718  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:29:23.292619  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1121 14:29:23.314336  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:29:23.337086  249617 provision.go:87] duration metric: took 366.309314ms to configureAuth
	I1121 14:29:23.337129  249617 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:29:23.337306  249617 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:23.337320  249617 machine.go:97] duration metric: took 3.89496072s to provisionDockerMachine
	I1121 14:29:23.337326  249617 client.go:176] duration metric: took 11.527957207s to LocalClient.Create
	I1121 14:29:23.337344  249617 start.go:167] duration metric: took 11.528071392s to libmachine.API.Create "old-k8s-version-012258"
	I1121 14:29:23.337352  249617 start.go:293] postStartSetup for "old-k8s-version-012258" (driver="docker")
	I1121 14:29:23.337365  249617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:29:23.337422  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:29:23.337471  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.359217  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.466089  249617 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:29:23.470146  249617 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:29:23.470174  249617 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:29:23.470185  249617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:29:23.470249  249617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:29:23.470349  249617 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:29:23.470480  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:29:23.479086  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:23.506776  249617 start.go:296] duration metric: took 169.402964ms for postStartSetup
	I1121 14:29:23.507166  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:23.527044  249617 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/config.json ...
	I1121 14:29:23.527374  249617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:29:23.527425  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.546669  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.645314  249617 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:29:23.650498  249617 start.go:128] duration metric: took 11.844529266s to createHost
	I1121 14:29:23.650523  249617 start.go:83] releasing machines lock for "old-k8s-version-012258", held for 11.844683904s
	I1121 14:29:23.650592  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:23.671161  249617 ssh_runner.go:195] Run: cat /version.json
	I1121 14:29:23.671227  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.671321  249617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:29:23.671403  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.694189  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.694196  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.856609  249617 ssh_runner.go:195] Run: systemctl --version
	I1121 14:29:23.863273  249617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:29:23.867917  249617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:29:23.867991  249617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:29:23.895679  249617 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:29:23.895707  249617 start.go:496] detecting cgroup driver to use...
	I1121 14:29:23.895742  249617 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:29:23.895805  249617 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:29:23.911897  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:29:23.925350  249617 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:29:23.925400  249617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:29:23.943424  249617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:29:23.962675  249617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:29:24.059689  249617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:29:24.169263  249617 docker.go:234] disabling docker service ...
	I1121 14:29:24.169325  249617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:29:24.191949  249617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:29:24.206181  249617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:29:24.319402  249617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:29:24.455060  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:29:24.472888  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:29:24.497138  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1121 14:29:24.524424  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:29:24.536491  249617 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:29:24.536702  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:29:24.547193  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:24.559919  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:29:24.571627  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:24.581977  249617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:29:24.629839  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:29:24.640310  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:29:24.650595  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:29:24.660801  249617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:29:24.669493  249617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:29:24.677810  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:24.781513  249617 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:29:24.929576  249617 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:29:24.929707  249617 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:29:24.936782  249617 start.go:564] Will wait 60s for crictl version
	I1121 14:29:24.936893  249617 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.942453  249617 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:29:24.986447  249617 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:29:24.986527  249617 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:25.018021  249617 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:25.051308  249617 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1121 14:29:25.029036  255774 start.go:309] selected driver: docker
	I1121 14:29:25.029056  255774 start.go:930] validating driver "docker" against <nil>
	I1121 14:29:25.029071  255774 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:29:25.029977  255774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:29:25.123370  255774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-21 14:29:25.11156096 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:29:25.123696  255774 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:29:25.124078  255774 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:29:25.125758  255774 out.go:179] * Using Docker driver with root privileges
	I1121 14:29:25.127166  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:25.127249  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:25.127262  255774 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:29:25.127353  255774 start.go:353] cluster config:
	{Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:25.129454  255774 out.go:179] * Starting "default-k8s-diff-port-376255" primary control-plane node in "default-k8s-diff-port-376255" cluster
	I1121 14:29:25.130961  255774 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:29:25.132637  255774 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:29:25.134190  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:25.134237  255774 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1121 14:29:25.134251  255774 cache.go:65] Caching tarball of preloaded images
	I1121 14:29:25.134262  255774 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:29:25.134379  255774 preload.go:238] Found /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1121 14:29:25.134391  255774 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 14:29:25.134520  255774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json ...
	I1121 14:29:25.134560  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json: {Name:mk1db0ba6952ac549a7eae06783e73916a7ad392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.161339  255774 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:29:25.161363  255774 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:29:25.161384  255774 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:29:25.161419  255774 start.go:360] acquireMachinesLock for default-k8s-diff-port-376255: {Name:mka18b3ecaec4bae205bc7951f90400738bef300 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:29:25.161518  255774 start.go:364] duration metric: took 79.824µs to acquireMachinesLock for "default-k8s-diff-port-376255"
	I1121 14:29:25.161561  255774 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disabl
eCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:25.161653  255774 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:29:25.055066  249617 cli_runner.go:164] Run: docker network inspect old-k8s-version-012258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:25.085953  249617 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1121 14:29:25.093859  249617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:25.111432  249617 kubeadm.go:884] updating cluster {Name:old-k8s-version-012258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:29:25.111671  249617 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 14:29:25.111753  249617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:25.143860  249617 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:25.143888  249617 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:29:25.143953  249617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:25.174770  249617 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:25.174789  249617 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:29:25.174797  249617 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 containerd true true} ...
	I1121 14:29:25.174897  249617 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-012258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:25.174970  249617 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:25.211311  249617 cni.go:84] Creating CNI manager for ""
	I1121 14:29:25.211341  249617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:25.211371  249617 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:25.211401  249617 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-012258 NodeName:old-k8s-version-012258 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:25.211596  249617 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-012258"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
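	The kubeadm config logged above is a single multi-document YAML: an InitConfiguration (node registration and bootstrap tokens), a ClusterConfiguration (apiserver/controller-manager/scheduler/etcd settings), a KubeletConfiguration, and a KubeProxyConfiguration. A minimal, stdlib-only Go sketch of splitting such a file and reporting each document's apiVersion/kind (the input path is an assumption for illustration, not something the test writes):

	    // listkinds.go - print the apiVersion/kind of every document in a
	    // multi-document kubeadm YAML (file path is an example, not from the test).
	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    func main() {
	        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        // kubeadm separates the documents with lines containing only "---".
	        for i, doc := range strings.Split(string(data), "\n---\n") {
	            var apiVersion, kind string
	            for _, line := range strings.Split(doc, "\n") {
	                trimmed := strings.TrimSpace(line)
	                if v, ok := strings.CutPrefix(trimmed, "apiVersion:"); ok {
	                    apiVersion = strings.TrimSpace(v)
	                }
	                if v, ok := strings.CutPrefix(trimmed, "kind:"); ok {
	                    kind = strings.TrimSpace(v)
	                }
	            }
	            fmt.Printf("document %d: %s %s\n", i+1, apiVersion, kind)
	        }
	    }

	Run against the file written to the node as /var/tmp/minikube/kubeadm.yaml.new above, this would list the four kinds in the order they appear.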
	
	I1121 14:29:25.211673  249617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1121 14:29:25.224124  249617 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:29:25.224202  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:25.235430  249617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1121 14:29:25.254181  249617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:25.283842  249617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1121 14:29:25.302971  249617 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:25.309092  249617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:25.325170  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:25.438037  249617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:25.469767  249617 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258 for IP: 192.168.94.2
	I1121 14:29:25.469790  249617 certs.go:195] generating shared ca certs ...
	I1121 14:29:25.469811  249617 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.470023  249617 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:25.470095  249617 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:25.470105  249617 certs.go:257] generating profile certs ...
	I1121 14:29:25.470177  249617 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key
	I1121 14:29:25.470199  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt with IP's: []
	I1121 14:29:25.634340  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt ...
	I1121 14:29:25.634374  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt: {Name:mk5e1a3132436dad740351857d527e3c45fff4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.648586  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key ...
	I1121 14:29:25.648625  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key: {Name:mk757010d91a13b26eb1340def496546bee9bf26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.648791  249617 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc
	I1121 14:29:25.648816  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1121 14:29:25.817862  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc ...
	I1121 14:29:25.817892  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc: {Name:mk8a482343e99af6e8bdd7e52a6e5b813685beb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.818099  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc ...
	I1121 14:29:25.818121  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc: {Name:mk4cf761e884b2a77e105e39ad6b0495b59b5aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.818237  249617 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt
	I1121 14:29:25.818331  249617 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key
	I1121 14:29:25.818390  249617 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key
	I1121 14:29:25.818406  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt with IP's: []
	I1121 14:29:26.390351  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt ...
	I1121 14:29:26.390391  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt: {Name:mk37207f300780275f6aa5331fc436d60739196c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:26.390599  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key ...
	I1121 14:29:26.390617  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key: {Name:mkff5d416178c38a50235608b783c3957bee8456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:26.390849  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:26.390898  249617 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:26.390913  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:26.390946  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:26.390988  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:26.391029  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:26.391086  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:26.391817  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:26.418450  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:26.446063  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:26.469197  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:26.493823  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1121 14:29:26.526847  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:29:26.555176  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
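	The profile certificates generated above (client, apiserver, proxy-client) are leaf certs with explicit IP SANs; for the apiserver cert the log shows [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]. A rough, stdlib-only Go sketch of minting a cert with that SAN list (self-signed here for brevity, whereas minikube signs with its minikubeCA key; key size, subject, and validity are illustrative):

	    // profilecert.go - sketch of issuing a cert with IP SANs similar to the
	    // apiserver profile cert above; details differ from minikube's crypto.go.
	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "fmt"
	        "math/big"
	        "net"
	        "time"
	    )

	    func main() {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{CommonName: "minikube"},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(24 * time.Hour),
	            // Same SAN list the log reports for the apiserver cert.
	            IPAddresses: []net.IP{
	                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
	                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
	            },
	            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        // Self-signed: template doubles as parent. minikube instead signs
	        // with the shared minikubeCA generated earlier in the log.
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	    }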
	I1121 14:29:25.915600  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:25.916118  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:25.916177  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:25.916228  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:25.948057  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:25.948080  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:25.948087  213058 cri.go:89] found id: ""
	I1121 14:29:25.948096  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:25.948160  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.952634  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.956801  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:25.956870  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:25.990988  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:25.991014  213058 cri.go:89] found id: ""
	I1121 14:29:25.991024  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:25.991083  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.995665  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:25.995736  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:26.031577  213058 cri.go:89] found id: ""
	I1121 14:29:26.031604  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.031612  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:26.031618  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:26.031665  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:26.064880  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:26.064907  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:26.064912  213058 cri.go:89] found id: ""
	I1121 14:29:26.064922  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:26.064979  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.070274  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.075659  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:26.075731  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:26.108079  213058 cri.go:89] found id: ""
	I1121 14:29:26.108108  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.108118  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:26.108125  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:26.108181  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:26.138988  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:26.139018  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:26.139024  213058 cri.go:89] found id: ""
	I1121 14:29:26.139034  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:26.139096  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.143487  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.147564  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:26.147631  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:26.185747  213058 cri.go:89] found id: ""
	I1121 14:29:26.185774  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.185785  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:26.185793  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:26.185848  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:26.220265  213058 cri.go:89] found id: ""
	I1121 14:29:26.220296  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.220308  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:26.220321  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:26.220335  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:26.265042  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:26.265072  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:26.402636  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:26.402672  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:26.484531  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:26.484565  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:26.484581  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:26.534239  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:26.534294  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:26.579971  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:26.580016  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:26.643693  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:26.643727  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:26.683712  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:26.683748  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:26.702800  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:26.702836  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:26.741813  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:26.741845  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:26.812944  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:26.812997  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:26.855307  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:26.855347  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:24.308535  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1121 14:29:24.308619  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.317176  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1121 14:29:24.317245  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.318774  252125 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1121 14:29:24.318825  252125 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.318867  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.328208  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1121 14:29:24.328249  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1121 14:29:24.328291  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.328305  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.328664  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1121 14:29:24.328708  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1121 14:29:24.335839  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1121 14:29:24.335900  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.337631  252125 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1121 14:29:24.337672  252125 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.337713  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.346363  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.346443  252125 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1121 14:29:24.346484  252125 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.346517  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.361284  252125 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1121 14:29:24.361331  252125 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.361375  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.361424  252125 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1121 14:29:24.361445  252125 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.361477  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.366787  252125 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1121 14:29:24.366831  252125 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1121 14:29:24.366871  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.379457  252125 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1121 14:29:24.379503  252125 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.379558  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.379677  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.388569  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.388608  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.388658  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.388681  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.388574  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.418705  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.418763  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.427350  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.434639  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.434777  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.437430  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.437452  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.477986  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.478027  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.478099  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:29:24.478334  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:24.478136  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.485019  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.485026  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.489362  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.521124  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.521651  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:29:24.521767  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:24.553384  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1121 14:29:24.553425  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1121 14:29:24.553522  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:29:24.553632  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:24.553699  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1121 14:29:24.553755  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.553769  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:29:24.553803  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:29:24.553853  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:24.553860  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:24.553893  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:29:24.553920  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1121 14:29:24.553945  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:24.553945  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1121 14:29:24.565027  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1121 14:29:24.565077  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1121 14:29:24.565153  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1121 14:29:24.565169  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1121 14:29:24.574297  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1121 14:29:24.574338  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1121 14:29:24.574363  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1121 14:29:24.574390  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1121 14:29:24.574393  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1121 14:29:24.574407  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1121 14:29:24.784169  252125 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.784246  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.964305  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1121 14:29:25.029557  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:25.029626  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:25.445459  252125 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1121 14:29:25.445578  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:26.691152  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.661495413s)
	I1121 14:29:26.691188  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1121 14:29:26.691209  252125 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:26.691206  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.245604103s)
	I1121 14:29:26.691250  252125 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1121 14:29:26.691264  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:26.691297  252125 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:26.691347  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.696141  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:28.100615  252125 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.404441617s)
	I1121 14:29:28.100696  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:28.100615  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.409327822s)
	I1121 14:29:28.100767  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1121 14:29:28.100803  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:28.100853  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:28.132780  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
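	The image-cache path in this block follows a check-then-transfer pattern: "ctr -n=k8s.io images ls" decides whether an image already exists in containerd, "crictl rmi" clears a stale tag, "stat" checks for the tarball under /var/lib/minikube/images, and only missing tarballs are copied over and imported with "ctr -n=k8s.io images import". A stripped-down, stdlib-only Go sketch of the "stat, then copy only if missing" step (paths are examples and the copy is local rather than over SSH):

	    // cachesync.go - sketch of the existence check used for cached image
	    // tarballs; src/dst paths are illustrative.
	    package main

	    import (
	        "fmt"
	        "io"
	        "os"
	    )

	    // ensureFile copies src to dst only when dst does not already exist,
	    // mirroring the "stat -c ..." existence check seen in the log above.
	    func ensureFile(src, dst string) error {
	        if _, err := os.Stat(dst); err == nil {
	            fmt.Println("exists, skipping transfer:", dst)
	            return nil
	        }
	        in, err := os.Open(src)
	        if err != nil {
	            return err
	        }
	        defer in.Close()
	        out, err := os.Create(dst)
	        if err != nil {
	            return err
	        }
	        defer out.Close()
	        _, err = io.Copy(out, in)
	        return err
	    }

	    func main() {
	        // After a successful transfer the node-side step would be, roughly:
	        //   sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	        if err := ensureFile("cache/pause_3.10.1", "/tmp/pause_3.10.1"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	    }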
	I1121 14:29:25.163849  255774 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:29:25.164318  255774 start.go:159] libmachine.API.Create for "default-k8s-diff-port-376255" (driver="docker")
	I1121 14:29:25.164395  255774 client.go:173] LocalClient.Create starting
	I1121 14:29:25.164513  255774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem
	I1121 14:29:25.164575  255774 main.go:143] libmachine: Decoding PEM data...
	I1121 14:29:25.164605  255774 main.go:143] libmachine: Parsing certificate...
	I1121 14:29:25.164704  255774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem
	I1121 14:29:25.164760  255774 main.go:143] libmachine: Decoding PEM data...
	I1121 14:29:25.164776  255774 main.go:143] libmachine: Parsing certificate...
	I1121 14:29:25.165330  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:29:25.188513  255774 cli_runner.go:211] docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:29:25.188614  255774 network_create.go:284] running [docker network inspect default-k8s-diff-port-376255] to gather additional debugging logs...
	I1121 14:29:25.188640  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255
	W1121 14:29:25.213297  255774 cli_runner.go:211] docker network inspect default-k8s-diff-port-376255 returned with exit code 1
	I1121 14:29:25.213338  255774 network_create.go:287] error running [docker network inspect default-k8s-diff-port-376255]: docker network inspect default-k8s-diff-port-376255: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-376255 not found
	I1121 14:29:25.213435  255774 network_create.go:289] output of [docker network inspect default-k8s-diff-port-376255]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-376255 not found
	
	** /stderr **
	I1121 14:29:25.213589  255774 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:25.240844  255774 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-66cfc06dc768 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:44:28:22:82:94} reservation:<nil>}
	I1121 14:29:25.241874  255774 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-39921db0d513 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:e4:85:98:a5:e3} reservation:<nil>}
	I1121 14:29:25.242975  255774 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-36a8741c90a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:21:99:72:63:4a} reservation:<nil>}
	I1121 14:29:25.244042  255774 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-63d543fc8bbd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:58:40:d2:33:c4} reservation:<nil>}
	I1121 14:29:25.245269  255774 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb46e0}
	I1121 14:29:25.245303  255774 network_create.go:124] attempt to create docker network default-k8s-diff-port-376255 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1121 14:29:25.245384  255774 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 default-k8s-diff-port-376255
	I1121 14:29:25.322210  255774 network_create.go:108] docker network default-k8s-diff-port-376255 192.168.85.0/24 created
	I1121 14:29:25.322244  255774 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-376255" container
	I1121 14:29:25.322309  255774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:29:25.346732  255774 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-376255 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:29:25.374919  255774 oci.go:103] Successfully created a docker volume default-k8s-diff-port-376255
	I1121 14:29:25.374994  255774 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-376255-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --entrypoint /usr/bin/test -v default-k8s-diff-port-376255:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:29:26.343288  255774 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-376255
	I1121 14:29:26.343370  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:26.343387  255774 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:29:26.343457  255774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-376255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
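	The network setup in this block walks candidate private /24 subnets (192.168.49.0, .58.0, .67.0, .76.0), skips each one an existing bridge already occupies, and takes the first free one, 192.168.85.0/24, for the new docker network. A small Go sketch of that selection logic (the "taken" set is hard-coded for illustration; minikube derives it from host interfaces and existing Docker networks):

	    // subnetpick.go - choose the first free 192.168.x.0/24 from a candidate
	    // list, skipping subnets already in use. The taken set is illustrative.
	    package main

	    import "fmt"

	    func main() {
	        // Same progression seen in the log: 49, 58, 67, 76, 85, ...
	        candidates := []int{49, 58, 67, 76, 85, 94}
	        taken := map[int]bool{49: true, 58: true, 67: true, 76: true}

	        for _, third := range candidates {
	            if taken[third] {
	                fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
	                continue
	            }
	            fmt.Printf("using free private subnet 192.168.%d.0/24\n", third)
	            return
	        }
	        fmt.Println("no free subnet found")
	    }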
	I1121 14:29:26.582319  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:29:26.606403  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:26.635408  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:26.661287  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:26.686582  249617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:26.703157  249617 ssh_runner.go:195] Run: openssl version
	I1121 14:29:26.712353  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:26.725593  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.732381  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.732523  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.774823  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:26.785127  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:26.796035  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.800685  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.800751  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.842185  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:26.852632  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:26.863838  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.869571  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.869642  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.922017  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:26.934065  249617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:26.939457  249617 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:26.939526  249617 kubeadm.go:401] StartCluster: {Name:old-k8s-version-012258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:26.939648  249617 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:26.939710  249617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:26.978114  249617 cri.go:89] found id: ""
	I1121 14:29:26.978192  249617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:26.989363  249617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:27.000529  249617 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:27.000603  249617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:27.012158  249617 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:27.012179  249617 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:27.012231  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:29:27.022084  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:27.022141  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:27.034139  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:29:27.044897  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:27.045038  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:27.056593  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:29:27.066532  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:27.066615  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:27.077925  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:29:27.088254  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:27.088320  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
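
The grep-then-rm sequence above is the stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; anything else (or anything missing, as on this first start) is removed so kubeadm init can regenerate it. A minimal Go sketch of that pattern, run locally rather than over SSH (the file list and endpoint come from the log; everything else is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// endpoint is the control-plane URL the cleanup greps for in the log above.
const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			fmt.Printf("%s already targets %s, keeping it\n", f, endpoint)
			continue
		}
		// Missing or pointing elsewhere: drop it, mirroring `sudo rm -f` in the log.
		if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
			fmt.Fprintf(os.Stderr, "remove %s: %v\n", f, rmErr)
		}
	}
}
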
	I1121 14:29:27.098442  249617 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:27.205509  249617 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:27.290009  249617 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:29.388121  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:29.388594  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:29.388645  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:29.388690  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:29.416964  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:29.416991  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:29.416996  213058 cri.go:89] found id: ""
	I1121 14:29:29.417006  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:29.417074  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.421476  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.425483  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:29.425557  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:29.453687  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:29.453708  213058 cri.go:89] found id: ""
	I1121 14:29:29.453718  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:29.453783  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.458267  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:29.458353  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:29.485804  213058 cri.go:89] found id: ""
	I1121 14:29:29.485865  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.485876  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:29.485883  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:29.485940  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:29.514265  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:29.514290  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:29.514294  213058 cri.go:89] found id: ""
	I1121 14:29:29.514302  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:29.514349  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.518626  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.522446  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:29.522501  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:29.549770  213058 cri.go:89] found id: ""
	I1121 14:29:29.549799  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.549811  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:29.549819  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:29.549868  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:29.577193  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:29.577217  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:29.577222  213058 cri.go:89] found id: ""
	I1121 14:29:29.577230  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:29.577288  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.581256  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.585291  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:29.585347  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:29.614632  213058 cri.go:89] found id: ""
	I1121 14:29:29.614664  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.614674  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:29.614682  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:29.614740  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:29.645697  213058 cri.go:89] found id: ""
	I1121 14:29:29.645721  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.645730  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:29.645741  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:29.645756  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:29.675578  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:29.675607  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:29.718952  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:29.718990  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:29.750089  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:29.750117  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:29.858708  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:29.858738  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:29.902976  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:29.903013  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:29.938083  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:29.938118  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:29.976329  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:29.976366  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:29.991448  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:29.991485  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:30.053990  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:30.054015  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:30.054032  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:30.089042  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:30.089076  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:30.124498  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:30.124528  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
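
Interleaved with the kubeadm run above, process 213058 keeps probing its own apiserver at https://192.168.76.2:8443/healthz and, on every connection-refused answer, falls back to collecting component logs via crictl. A small Go sketch of that health probe (address from the log; retry count, interval, and the skipped certificate verification are assumptions):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a self-signed cert during bootstrap, so skip verification.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	const url = "https://192.168.76.2:8443/healthz"
	for attempt := 0; attempt < 10; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the "stopped: ... connection refused" lines in the log.
			fmt.Printf("stopped: %v\n", err)
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz responded:", resp.Status)
		return
	}
	fmt.Println("apiserver never became healthy")
}
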
	I1121 14:29:32.685601  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:32.686035  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:32.686089  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:32.686144  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:32.744948  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:32.745095  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:32.745132  213058 cri.go:89] found id: ""
	I1121 14:29:32.745169  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:32.745355  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.752020  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.760837  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:32.761106  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:32.807418  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:32.807451  213058 cri.go:89] found id: ""
	I1121 14:29:32.807462  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:32.807521  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.813216  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:32.813289  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:32.852598  213058 cri.go:89] found id: ""
	I1121 14:29:32.852633  213058 logs.go:282] 0 containers: []
	W1121 14:29:32.852645  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:32.852653  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:32.852711  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:32.889120  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:32.889144  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:32.889148  213058 cri.go:89] found id: ""
	I1121 14:29:32.889157  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:32.889211  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.894834  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.900572  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:32.900646  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:32.937810  213058 cri.go:89] found id: ""
	I1121 14:29:32.937836  213058 logs.go:282] 0 containers: []
	W1121 14:29:32.937846  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:32.937853  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:32.937914  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:32.975713  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:32.975735  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:32.975741  213058 cri.go:89] found id: ""
	I1121 14:29:32.975751  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:32.975815  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.981574  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.985965  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:32.986030  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:33.019894  213058 cri.go:89] found id: ""
	I1121 14:29:33.019923  213058 logs.go:282] 0 containers: []
	W1121 14:29:33.019935  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:33.019949  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:33.020009  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:33.051872  213058 cri.go:89] found id: ""
	I1121 14:29:33.051901  213058 logs.go:282] 0 containers: []
	W1121 14:29:33.051911  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:33.051923  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:33.051937  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:33.103114  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:33.103153  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:33.142816  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:33.142846  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:33.209677  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:33.209736  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:33.255185  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:33.255220  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:33.272562  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:33.272600  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:33.319098  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:33.319132  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:33.366245  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:33.366286  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:33.410624  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:33.410660  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:33.458217  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:33.458253  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:33.586879  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:33.586919  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1121 14:29:29.835800  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.734910291s)
	I1121 14:29:29.835838  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1121 14:29:29.835860  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:29.835902  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:29.835802  252125 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.702989246s)
	I1121 14:29:29.835965  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1121 14:29:29.836056  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:29.840842  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1121 14:29:29.840873  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1121 14:29:32.866902  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (3.030968163s)
	I1121 14:29:32.866941  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1121 14:29:32.866961  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:32.867002  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:31.901829  255774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-376255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.558304176s)
	I1121 14:29:31.901864  255774 kic.go:203] duration metric: took 5.558473353s to extract preloaded images to volume ...
	W1121 14:29:31.901941  255774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 14:29:31.901969  255774 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 14:29:31.902010  255774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:29:31.985847  255774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-376255 --name default-k8s-diff-port-376255 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --network default-k8s-diff-port-376255 --ip 192.168.85.2 --volume default-k8s-diff-port-376255:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:29:32.403824  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Running}}
	I1121 14:29:32.427802  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.456228  255774 cli_runner.go:164] Run: docker exec default-k8s-diff-port-376255 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:29:32.514766  255774 oci.go:144] the created container "default-k8s-diff-port-376255" has a running status.
	I1121 14:29:32.514799  255774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa...
	I1121 14:29:32.829505  255774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:29:32.861911  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.888316  255774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:29:32.888342  255774 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-376255 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:29:32.948121  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.975355  255774 machine.go:94] provisionDockerMachine start ...
	I1121 14:29:32.975799  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:33.002463  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:33.002813  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:33.002834  255774 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:29:33.003677  255774 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37682->127.0.0.1:33070: read: connection reset by peer
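
The handshake failure above is transient: the container was created moments earlier and sshd is not yet accepting connections, so provisioning simply retries (the same SSH session succeeds at 14:29:36 further down). A sketch of such a wait loop, using a plain TCP dial instead of a full SSH handshake (port from the log; timeout and interval are assumptions):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Host port that Docker mapped to the container's sshd in the log above.
	const addr = "127.0.0.1:33070"
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("ssh port is accepting connections")
			return
		}
		fmt.Printf("not ready yet: %v\n", err)
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for sshd")
}
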
	I1121 14:29:37.228254  249617 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1121 14:29:37.228434  249617 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:37.228644  249617 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:37.228822  249617 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:37.228907  249617 kubeadm.go:319] OS: Linux
	I1121 14:29:37.228971  249617 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:37.229029  249617 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:37.229111  249617 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:37.229198  249617 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:37.229264  249617 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:37.229333  249617 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:37.229403  249617 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:37.229468  249617 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:37.229624  249617 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:37.229762  249617 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:37.229892  249617 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1121 14:29:37.230051  249617 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:37.235113  249617 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:37.235306  249617 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:37.235508  249617 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:37.235691  249617 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:37.235858  249617 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:37.236102  249617 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:37.236205  249617 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:37.236303  249617 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:37.236516  249617 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-012258] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:29:37.236607  249617 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:37.236765  249617 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-012258] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:29:37.236861  249617 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:37.236954  249617 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:37.237021  249617 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:37.237104  249617 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:37.237178  249617 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:37.237257  249617 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:37.237352  249617 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:37.237438  249617 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:37.237554  249617 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:37.237649  249617 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:37.239227  249617 out.go:252]   - Booting up control plane ...
	I1121 14:29:37.239369  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:37.239534  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:37.239682  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:37.239829  249617 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:37.239965  249617 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:37.240022  249617 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:37.240260  249617 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1121 14:29:37.240373  249617 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.503152 seconds
	I1121 14:29:37.240759  249617 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:37.240933  249617 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:37.241035  249617 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:37.241286  249617 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-012258 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:37.241409  249617 kubeadm.go:319] [bootstrap-token] Using token: yix385.n0xejrlt7sdx1ngs
	I1121 14:29:37.243198  249617 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:37.243379  249617 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:37.243497  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:37.243755  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:37.243946  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:37.244147  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:37.244287  249617 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:37.244477  249617 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:37.244564  249617 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:37.244632  249617 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:37.244642  249617 kubeadm.go:319] 
	I1121 14:29:37.244725  249617 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:37.244736  249617 kubeadm.go:319] 
	I1121 14:29:37.244834  249617 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:37.244845  249617 kubeadm.go:319] 
	I1121 14:29:37.244877  249617 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:37.244966  249617 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:37.245033  249617 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:37.245045  249617 kubeadm.go:319] 
	I1121 14:29:37.245111  249617 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:37.245120  249617 kubeadm.go:319] 
	I1121 14:29:37.245178  249617 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:37.245192  249617 kubeadm.go:319] 
	I1121 14:29:37.245274  249617 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:37.245371  249617 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:37.245468  249617 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:37.245476  249617 kubeadm.go:319] 
	I1121 14:29:37.245604  249617 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:37.245734  249617 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:37.245755  249617 kubeadm.go:319] 
	I1121 14:29:37.245866  249617 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yix385.n0xejrlt7sdx1ngs \
	I1121 14:29:37.246024  249617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:37.246062  249617 kubeadm.go:319] 	--control-plane 
	I1121 14:29:37.246072  249617 kubeadm.go:319] 
	I1121 14:29:37.246178  249617 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:37.246189  249617 kubeadm.go:319] 
	I1121 14:29:37.246294  249617 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yix385.n0xejrlt7sdx1ngs \
	I1121 14:29:37.246443  249617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:37.246454  249617 cni.go:84] Creating CNI manager for ""
	I1121 14:29:37.246462  249617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:37.248274  249617 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:36.147516  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376255
	
	I1121 14:29:36.147569  255774 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-376255"
	I1121 14:29:36.147633  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.169609  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:36.169898  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:36.169928  255774 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-376255 && echo "default-k8s-diff-port-376255" | sudo tee /etc/hostname
	I1121 14:29:36.328958  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376255
	
	I1121 14:29:36.329040  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.353105  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:36.353414  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:36.353448  255774 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-376255' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-376255/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-376255' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:29:36.504067  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:29:36.504097  255774 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:29:36.504119  255774 ubuntu.go:190] setting up certificates
	I1121 14:29:36.504133  255774 provision.go:84] configureAuth start
	I1121 14:29:36.504206  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:36.528674  255774 provision.go:143] copyHostCerts
	I1121 14:29:36.528752  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:29:36.528762  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:29:36.528840  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:29:36.528968  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:29:36.528997  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:29:36.529043  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:29:36.529141  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:29:36.529152  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:29:36.529188  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:29:36.529281  255774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-376255 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-376255 localhost minikube]
	I1121 14:29:36.617208  255774 provision.go:177] copyRemoteCerts
	I1121 14:29:36.617283  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:29:36.617345  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.639948  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:36.749486  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:29:36.777360  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1121 14:29:36.804875  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:29:36.830920  255774 provision.go:87] duration metric: took 326.762892ms to configureAuth
	I1121 14:29:36.830953  255774 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:29:36.831165  255774 config.go:182] Loaded profile config "default-k8s-diff-port-376255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:36.831181  255774 machine.go:97] duration metric: took 3.855604158s to provisionDockerMachine
	I1121 14:29:36.831191  255774 client.go:176] duration metric: took 11.666782197s to LocalClient.Create
	I1121 14:29:36.831216  255774 start.go:167] duration metric: took 11.666902979s to libmachine.API.Create "default-k8s-diff-port-376255"
	I1121 14:29:36.831234  255774 start.go:293] postStartSetup for "default-k8s-diff-port-376255" (driver="docker")
	I1121 14:29:36.831254  255774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:29:36.831311  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:29:36.831360  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.855811  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:36.969760  255774 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:29:36.974452  255774 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:29:36.974529  255774 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:29:36.974577  255774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:29:36.974658  255774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:29:36.974771  255774 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:29:36.974903  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:29:36.984975  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:37.017462  255774 start.go:296] duration metric: took 186.210262ms for postStartSetup
	I1121 14:29:37.017947  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:37.041309  255774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json ...
	I1121 14:29:37.041659  255774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:29:37.041731  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.070697  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.177189  255774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:29:37.185711  255774 start.go:128] duration metric: took 12.024042461s to createHost
	I1121 14:29:37.185741  255774 start.go:83] releasing machines lock for "default-k8s-diff-port-376255", held for 12.024206528s
	I1121 14:29:37.185820  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:37.211853  255774 ssh_runner.go:195] Run: cat /version.json
	I1121 14:29:37.211903  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.211965  255774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:29:37.212033  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.238575  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.242252  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.421321  255774 ssh_runner.go:195] Run: systemctl --version
	I1121 14:29:37.431728  255774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:29:37.437939  255774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:29:37.438053  255774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:29:37.469409  255774 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
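
Before installing its own CNI, the tooling renames any pre-existing bridge or podman CNI configs in /etc/cni/net.d with a .mk_disabled suffix, which is what the find/mv invocation above does. A local Go sketch of the same idea (the match rules mirror the find expression; it needs root on a real node):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Only bridge/podman configs are disabled, as in the log's find expression.
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Println("disabled", src)
	}
}
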
	I1121 14:29:37.469437  255774 start.go:496] detecting cgroup driver to use...
	I1121 14:29:37.469471  255774 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:29:37.469521  255774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:29:37.490669  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:29:37.507754  255774 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:29:37.507821  255774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:29:37.525644  255774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:29:37.545289  255774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:29:37.674060  255774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:29:37.795128  255774 docker.go:234] disabling docker service ...
	I1121 14:29:37.795198  255774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:29:37.819043  255774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:29:37.834819  255774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:29:37.960408  255774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:29:38.072269  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:29:38.089314  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:29:38.105248  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:29:38.117445  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:29:38.128509  255774 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:29:38.128607  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:29:38.139526  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:38.150896  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:29:38.161459  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:38.173179  255774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:29:38.183645  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:29:38.194923  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:29:38.207896  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:29:38.220346  255774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:29:38.230823  255774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:29:38.241807  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:38.339708  255774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:29:38.460319  255774 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:29:38.460387  255774 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:29:38.465812  255774 start.go:564] Will wait 60s for crictl version
	I1121 14:29:38.465875  255774 ssh_runner.go:195] Run: which crictl
	I1121 14:29:38.470166  255774 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:29:38.507773  255774 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:29:38.507860  255774 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:38.532247  255774 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:38.559098  255774 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
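
After rewriting /etc/containerd/config.toml and restarting the service, the log shows a "Will wait 60s for socket path /run/containerd/containerd.sock" step before crictl is used. A minimal Go sketch of that wait (the poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/run/containerd/containerd.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		// The socket shows up once containerd has finished restarting.
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("containerd socket is up:", sock)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for", sock)
}
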
	W1121 14:29:33.655577  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:33.655599  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:33.655612  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.225853  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:36.226247  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:36.226304  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:36.226364  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:36.259583  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:36.259613  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:36.259619  213058 cri.go:89] found id: ""
	I1121 14:29:36.259628  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:36.259690  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.264798  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.269597  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:36.269663  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:36.304312  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:36.304335  213058 cri.go:89] found id: ""
	I1121 14:29:36.304346  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:36.304403  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.309760  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:36.309833  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:36.342617  213058 cri.go:89] found id: ""
	I1121 14:29:36.342643  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.342653  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:36.342660  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:36.342722  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:36.378880  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.378909  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:36.378914  213058 cri.go:89] found id: ""
	I1121 14:29:36.378924  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:36.378996  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.384032  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.388866  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:36.388932  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:36.427253  213058 cri.go:89] found id: ""
	I1121 14:29:36.427282  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.427293  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:36.427300  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:36.427355  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:36.461581  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:36.461604  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:36.461609  213058 cri.go:89] found id: ""
	I1121 14:29:36.461618  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:36.461677  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.466623  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.471422  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:36.471490  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:36.503502  213058 cri.go:89] found id: ""
	I1121 14:29:36.503533  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.503566  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:36.503575  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:36.503633  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:36.538350  213058 cri.go:89] found id: ""
	I1121 14:29:36.538379  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.538390  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:36.538404  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:36.538419  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:36.666987  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:36.667025  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:36.685628  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:36.685659  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:36.763464  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:36.763491  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:36.763508  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:36.808789  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:36.808832  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.887558  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:36.887596  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:36.952391  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:36.952434  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:36.993139  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:36.993167  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:37.037499  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:37.037552  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:37.084237  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:37.084270  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:37.132236  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:37.132272  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:37.172720  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:37.172753  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
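With the apiserver refusing connections on localhost:8443, the post-mortem above falls back to reading logs straight off the node. The equivalent manual commands, with <id> standing in for a container ID reported by crictl ps:

    sudo journalctl -u kubelet -n 400                  # kubelet service log
    sudo journalctl -u containerd -n 400               # runtime service log
    sudo crictl ps -a --quiet --name=kube-apiserver    # list apiserver container IDs (running or exited)
    sudo crictl logs --tail 400 <id>                   # dump the last 400 lines for one container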
	I1121 14:29:34.341753  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.474720913s)
	I1121 14:29:34.341781  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1121 14:29:34.341812  252125 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:34.341855  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:37.308520  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.966633628s)
	I1121 14:29:37.308585  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1121 14:29:37.308616  252125 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:37.308666  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:37.772300  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1121 14:29:37.772349  252125 cache_images.go:125] Successfully loaded all cached images
	I1121 14:29:37.772358  252125 cache_images.go:94] duration metric: took 13.627858156s to LoadCachedImages
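Loading the cached images amounts to importing each tarball into containerd's k8s.io namespace and then confirming it is visible through the CRI; a sketch for one image, using the staging path from this run:

    sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0   # import the cached image tarball
    sudo crictl images --output json                                         # verify the image shows up via CRI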
	I1121 14:29:37.772375  252125 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1121 14:29:37.772522  252125 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-921956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-921956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:37.772622  252125 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:37.802988  252125 cni.go:84] Creating CNI manager for ""
	I1121 14:29:37.803017  252125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:37.803041  252125 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:37.803067  252125 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-921956 NodeName:no-preload-921956 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:37.803212  252125 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-921956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:37.803298  252125 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:37.814189  252125 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1121 14:29:37.814255  252125 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:37.824124  252125 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1121 14:29:37.824214  252125 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1121 14:29:37.824231  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1121 14:29:37.824217  252125 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1121 14:29:37.829417  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1121 14:29:37.829466  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1121 14:29:38.860713  252125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:29:38.875498  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1121 14:29:38.880447  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1121 14:29:38.880477  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1121 14:29:39.014274  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1121 14:29:39.021151  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1121 14:29:39.021187  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1121 14:29:39.234010  252125 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:39.244382  252125 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1121 14:29:39.259897  252125 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:39.279143  252125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
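Because the no-preload profile has no cached binaries on the node, kubectl, kubelet and kubeadm are fetched from dl.k8s.io together with their published sha256 files and then copied over. A manual equivalent for one binary, with version and arch taken from the log above:

    V=v1.34.1 ARCH=amd64
    curl -LO "https://dl.k8s.io/release/${V}/bin/linux/${ARCH}/kubelet"
    curl -LO "https://dl.k8s.io/release/${V}/bin/linux/${ARCH}/kubelet.sha256"
    # the .sha256 file holds only the hash, so rebuild the "<hash>  <file>" line for sha256sum
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check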
	I1121 14:29:38.560688  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:38.580956  255774 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:29:38.585728  255774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:38.599140  255774 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:29:38.599295  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:38.599391  255774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:38.631637  255774 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:38.631660  255774 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:29:38.631720  255774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:38.665498  255774 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:38.665522  255774 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:29:38.665530  255774 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1121 14:29:38.665659  255774 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-376255 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:38.665752  255774 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:38.694106  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:38.694138  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:38.694156  255774 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:38.694182  255774 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-376255 NodeName:default-k8s-diff-port-376255 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:38.694318  255774 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-376255"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:38.694377  255774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:38.704016  255774 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:29:38.704074  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:38.712471  255774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1121 14:29:38.726311  255774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:38.743589  255774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
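The kubeadm.yaml.new written above stitches four kubeadm API objects into one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. If a hand-edited copy needs checking before init, newer kubeadm releases ship a validate subcommand; a hedged sketch using the binary path from this run:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new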
	I1121 14:29:38.759275  255774 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:38.763723  255774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:38.775814  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:38.870850  255774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:38.898876  255774 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255 for IP: 192.168.85.2
	I1121 14:29:38.898898  255774 certs.go:195] generating shared ca certs ...
	I1121 14:29:38.898917  255774 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:38.899068  255774 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:38.899116  255774 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:38.899130  255774 certs.go:257] generating profile certs ...
	I1121 14:29:38.899196  255774 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key
	I1121 14:29:38.899223  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt with IP's: []
	I1121 14:29:39.101636  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt ...
	I1121 14:29:39.101669  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt: {Name:mk48f410a390b01d5b10a9357a2648374ae8306b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.101873  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key ...
	I1121 14:29:39.101885  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key: {Name:mkb89c45215e08640f5b5fa9a6de6863ea0983e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.102008  255774 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066
	I1121 14:29:39.102024  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:29:39.438352  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 ...
	I1121 14:29:39.438387  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066: {Name:mkc5f7dc938a9541dec0c2accd850515b39a25d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.438574  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066 ...
	I1121 14:29:39.438586  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066: {Name:mka67f2d91e35acd02a0ed4174188db6877ef796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.438666  255774 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt
	I1121 14:29:39.438744  255774 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key
	I1121 14:29:39.438811  255774 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key
	I1121 14:29:39.438826  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt with IP's: []
	I1121 14:29:39.523793  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt ...
	I1121 14:29:39.523827  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt: {Name:mk2418751bb08ae4f2cae2628ba430b2e731f823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.524011  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key ...
	I1121 14:29:39.524031  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key: {Name:mk12031f310020bd38886fd870544563c6ab1faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
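The profile certificates above are generated in-process by minikube. To confirm that the resulting apiserver cert actually carries the expected SANs (the service IP 10.96.0.1, loopback addresses and the node IP 192.168.85.2), it can be inspected with openssl; a sketch using the path from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt \
      | grep -A1 "Subject Alternative Name"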
	I1121 14:29:39.524255  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:39.524307  255774 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:39.524323  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:39.524353  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:39.524383  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:39.524407  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:39.524445  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:39.525071  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:39.546065  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:39.565880  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:39.585450  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:39.604394  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1121 14:29:39.623736  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:29:39.642460  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:39.661463  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:29:39.681314  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:39.879137  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:39.899730  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:39.918630  255774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:39.935942  255774 ssh_runner.go:195] Run: openssl version
	I1121 14:29:39.943062  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:40.020861  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.026152  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.026209  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.067681  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:40.077051  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:40.087944  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.092369  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.092434  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.132125  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:40.142255  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:40.152828  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.157171  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.157265  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.199881  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
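Those openssl/ln steps register each certificate under /etc/ssl/certs by its subject hash, which is how OpenSSL locates trust anchors. Standalone, for one file (the hash b5213941 seen above belongs to minikubeCA.pem):

    f=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$f")       # prints the subject hash, e.g. b5213941
    sudo ln -fs "$f" "/etc/ssl/certs/${h}.0"      # <hash>.0 is the name OpenSSL looks up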
	I1121 14:29:40.210053  255774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:40.214456  255774 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:40.214524  255774 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:40.214625  255774 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:40.214692  255774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:40.249359  255774 cri.go:89] found id: ""
	I1121 14:29:40.249429  255774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:40.259121  255774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:40.270847  255774 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:40.270910  255774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:40.283266  255774 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:40.283287  255774 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:40.283341  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1121 14:29:40.293676  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:40.293725  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:40.303277  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1121 14:29:40.313015  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:40.313073  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:40.322086  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.330920  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:40.331015  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.339376  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1121 14:29:40.347984  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:40.348046  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:40.356683  255774 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
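Wrapped for readability, that bootstrap step is a plain kubeadm init against the generated config, with the preflight checks listed above skipped because the docker driver cannot satisfy them (see the SystemVerification note earlier in this log); an abridged sketch showing only a few of the ignored checks:

    # --ignore-preflight-errors list abridged here; the full list is printed in the line above
    sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem,SystemVerification'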
	I1121 14:29:40.404354  255774 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:29:40.404455  255774 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:40.435448  255774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:40.435583  255774 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:40.435628  255774 kubeadm.go:319] OS: Linux
	I1121 14:29:40.435689  255774 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:40.435827  255774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:40.435905  255774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:40.436039  255774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:40.436108  255774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:40.436176  255774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:40.436276  255774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:40.436351  255774 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:40.508224  255774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:40.508370  255774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:40.508531  255774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:29:40.513996  255774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:39.295828  252125 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:39.301164  252125 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
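The hosts-file edit above is written to be idempotent: drop any stale line for the name, append the current mapping, then copy the temp file back over /etc/hosts. A standalone sketch with the IP and name from this run (printf is used here for the literal tab):

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; printf '192.168.103.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts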
	I1121 14:29:39.312709  252125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:39.400897  252125 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:39.429294  252125 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956 for IP: 192.168.103.2
	I1121 14:29:39.429315  252125 certs.go:195] generating shared ca certs ...
	I1121 14:29:39.429332  252125 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.429485  252125 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:39.429583  252125 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:39.429600  252125 certs.go:257] generating profile certs ...
	I1121 14:29:39.429678  252125 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key
	I1121 14:29:39.429693  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt with IP's: []
	I1121 14:29:39.556088  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt ...
	I1121 14:29:39.556115  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: {Name:mkc697edce2d4ccb5a4a2ccbe74255aef4a205c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.556297  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key ...
	I1121 14:29:39.556312  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key: {Name:mkad7b167b883af61314c3f8b6c71358edc782dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.556419  252125 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d
	I1121 14:29:39.556435  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1121 14:29:39.871499  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d ...
	I1121 14:29:39.871529  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d: {Name:mkc839b1c936af809ed1159ef4599336fd260d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.871726  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d ...
	I1121 14:29:39.871748  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d: {Name:mkc2f0abcac84f6547f3e0edb165e90b14fdd7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.871882  252125 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt
	I1121 14:29:39.871997  252125 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key
	I1121 14:29:39.872096  252125 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key
	I1121 14:29:39.872120  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt with IP's: []
	I1121 14:29:40.083173  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt ...
	I1121 14:29:40.083201  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt: {Name:mkba7efd029f616230e0b3cf14c4f32abac0549e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:40.083385  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key ...
	I1121 14:29:40.083414  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key: {Name:mk24f6fbb57f5dfce4a401be193e0a832a6ccf6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:40.083661  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:40.083700  252125 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:40.083711  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:40.083749  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:40.083780  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:40.083827  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:40.083887  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:40.084653  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:40.106430  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:40.126520  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:40.148412  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:40.169973  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:29:40.191493  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:29:40.214458  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:40.234692  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:29:40.261986  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:40.352437  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:40.372804  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:40.394700  252125 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:40.411183  252125 ssh_runner.go:195] Run: openssl version
	I1121 14:29:40.419607  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:40.431060  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.436371  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.436429  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.481320  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:40.492797  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:40.502878  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.507432  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.507499  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.567779  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:40.577673  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:40.587826  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.592472  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.592528  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.627626  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:40.637464  252125 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:40.641884  252125 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:40.641943  252125 kubeadm.go:401] StartCluster: {Name:no-preload-921956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-921956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:40.642030  252125 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:40.642085  252125 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:40.673351  252125 cri.go:89] found id: ""
	I1121 14:29:40.673423  252125 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:40.682715  252125 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:40.691493  252125 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:40.691581  252125 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:40.700143  252125 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:40.700160  252125 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:40.700205  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:29:40.708734  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:40.708799  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:40.717135  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:29:40.726191  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:40.726262  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:40.734074  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.742647  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:40.742709  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.751091  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:29:40.759770  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:40.759841  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:40.768253  252125 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:40.810825  252125 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:29:40.810892  252125 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:40.831836  252125 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:40.831940  252125 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:40.832026  252125 kubeadm.go:319] OS: Linux
	I1121 14:29:40.832115  252125 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:40.832212  252125 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:40.832286  252125 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:40.832358  252125 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:40.832432  252125 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:40.832504  252125 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:40.832668  252125 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:40.832735  252125 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:40.895341  252125 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:40.895491  252125 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:40.895637  252125 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:29:40.901358  252125 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:37.249631  249617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:37.262987  249617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1121 14:29:37.263020  249617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:37.283444  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:38.138719  249617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:38.138808  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:38.138810  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-012258 minikube.k8s.io/updated_at=2025_11_21T14_29_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=old-k8s-version-012258 minikube.k8s.io/primary=true
	I1121 14:29:38.150782  249617 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:38.225220  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:38.726231  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:39.225533  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:39.725591  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.225601  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.725734  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:41.226112  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.521190  255774 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:40.521325  255774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:40.521431  255774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:41.003970  255774 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:41.240665  255774 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:41.425685  255774 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:41.689428  255774 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:41.923373  255774 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:41.923563  255774 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-376255 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:29:42.051973  255774 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:42.052979  255774 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-376255 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:29:42.277531  255774 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:42.491572  255774 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:42.605458  255774 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:42.605535  255774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:42.870659  255774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:43.039072  255774 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:29:43.228611  255774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:43.489903  255774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:43.563271  255774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:43.563948  255774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:43.568453  255774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:39.727688  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:39.728083  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:39.728134  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:39.728197  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:39.758413  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:39.758436  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:39.758441  213058 cri.go:89] found id: ""
	I1121 14:29:39.758452  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:39.758508  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.763439  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.767912  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:39.767980  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:39.802923  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:39.802948  213058 cri.go:89] found id: ""
	I1121 14:29:39.802957  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:39.803013  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.807778  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:39.807853  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:39.835286  213058 cri.go:89] found id: ""
	I1121 14:29:39.835314  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.835335  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:39.835343  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:39.835408  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:39.864986  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:39.865034  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:39.865040  213058 cri.go:89] found id: ""
	I1121 14:29:39.865050  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:39.865105  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.869441  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.873676  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:39.873739  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:39.902671  213058 cri.go:89] found id: ""
	I1121 14:29:39.902698  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.902707  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:39.902715  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:39.902762  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:39.933452  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:39.933477  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:39.933483  213058 cri.go:89] found id: ""
	I1121 14:29:39.933492  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:39.933557  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.938051  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.942029  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:39.942094  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:39.969991  213058 cri.go:89] found id: ""
	I1121 14:29:39.970018  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.970028  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:39.970036  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:39.970086  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:39.997381  213058 cri.go:89] found id: ""
	I1121 14:29:39.997406  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.997417  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:39.997429  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:39.997443  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:40.027188  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:40.027213  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:40.067878  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:40.067906  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:40.101358  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:40.101388  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:40.115674  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:40.115704  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:40.153845  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:40.153871  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:40.188913  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:40.188944  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:40.244995  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:40.245033  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:40.351506  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:40.351558  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:40.417221  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:40.417244  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:40.417263  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:40.457789  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:40.457836  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:40.520712  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:40.520748  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.056648  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:43.057094  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:43.057150  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:43.057204  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:43.085236  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:43.085260  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:43.085265  213058 cri.go:89] found id: ""
	I1121 14:29:43.085275  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:43.085333  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.089868  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.094074  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:43.094134  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:43.122420  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:43.122447  213058 cri.go:89] found id: ""
	I1121 14:29:43.122457  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:43.122512  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.126830  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:43.126892  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:43.156518  213058 cri.go:89] found id: ""
	I1121 14:29:43.156566  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.156577  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:43.156584  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:43.156646  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:43.185212  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:43.185233  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.185238  213058 cri.go:89] found id: ""
	I1121 14:29:43.185277  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:43.185338  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.190000  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.194074  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:43.194131  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:43.224175  213058 cri.go:89] found id: ""
	I1121 14:29:43.224201  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.224211  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:43.224218  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:43.224277  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:43.258260  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:43.258292  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:43.258299  213058 cri.go:89] found id: ""
	I1121 14:29:43.258310  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:43.258378  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.263276  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.268195  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:43.268264  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:43.303269  213058 cri.go:89] found id: ""
	I1121 14:29:43.303300  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.303311  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:43.303319  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:43.303379  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:43.333956  213058 cri.go:89] found id: ""
	I1121 14:29:43.333985  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.333995  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:43.334007  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:43.334021  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:43.366338  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:43.366369  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:43.458987  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:43.459027  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:43.497960  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:43.497995  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:43.539997  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:43.540035  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:43.575882  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:43.575911  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:40.903405  252125 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:40.903502  252125 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:40.903630  252125 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:41.180390  252125 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:41.211121  252125 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:41.523007  252125 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:42.461521  252125 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:42.641495  252125 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:42.641701  252125 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-921956] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:29:42.773640  252125 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:42.773843  252125 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-921956] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:29:42.921369  252125 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:43.256203  252125 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:43.834470  252125 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:43.834645  252125 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:43.949422  252125 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:44.093777  252125 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:29:44.227287  252125 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:44.509482  252125 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:44.696294  252125 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:44.696767  252125 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:44.705846  252125 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:43.573374  255774 out.go:252]   - Booting up control plane ...
	I1121 14:29:43.573510  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:43.573669  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:43.573781  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:43.590344  255774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:43.590494  255774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:29:43.599838  255774 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:29:43.600184  255774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:43.600247  255774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:43.720721  255774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:29:43.720878  255774 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:29:44.721899  255774 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001196965s
	I1121 14:29:44.724830  255774 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:29:44.724972  255774 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1121 14:29:44.725131  255774 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:29:44.725253  255774 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:29:41.726266  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:42.225460  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:42.725727  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.225740  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.725669  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:44.225350  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:44.725651  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:45.226025  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:45.725289  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:46.226316  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.632243  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:43.632278  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.681909  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:43.681959  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:43.723402  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:43.723454  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:43.776606  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:43.776641  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:43.793171  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:43.793200  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:43.854264  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:43.854293  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:43.854308  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:46.383659  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:46.384075  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:46.384128  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:46.384191  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:46.441629  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:46.441734  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:46.441754  213058 cri.go:89] found id: ""
	I1121 14:29:46.441776  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:46.441873  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.447714  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.453337  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:46.453422  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:46.497451  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:46.497475  213058 cri.go:89] found id: ""
	I1121 14:29:46.497485  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:46.497585  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.504731  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:46.504801  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:46.562972  213058 cri.go:89] found id: ""
	I1121 14:29:46.563014  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.563027  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:46.563036  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:46.563287  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:46.611186  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:46.611216  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:46.611221  213058 cri.go:89] found id: ""
	I1121 14:29:46.611231  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:46.611289  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.620404  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.626388  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:46.626559  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:46.674192  213058 cri.go:89] found id: ""
	I1121 14:29:46.674247  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.674259  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:46.674267  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:46.674448  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:46.749738  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:46.749765  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:46.749771  213058 cri.go:89] found id: ""
	I1121 14:29:46.749780  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:46.749835  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.756273  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.763986  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:46.764120  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:46.811858  213058 cri.go:89] found id: ""
	I1121 14:29:46.811883  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.811901  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:46.811909  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:46.811963  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:46.849599  213058 cri.go:89] found id: ""
	I1121 14:29:46.849645  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.849655  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:46.849666  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:46.849683  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:46.913988  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:46.914024  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:46.953189  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:46.953227  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:47.001663  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:47.001705  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:47.041106  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:47.041137  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:47.107673  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:47.107712  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:47.240432  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:47.240473  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:47.288852  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:47.288894  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1121 14:29:46.531314  255774 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.80645272s
	I1121 14:29:47.509316  255774 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.784421033s
	I1121 14:29:49.226647  255774 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501794549s
	I1121 14:29:49.239409  255774 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:49.252719  255774 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:49.264076  255774 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:49.264371  255774 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-376255 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:49.274799  255774 kubeadm.go:319] [bootstrap-token] Using token: 8nwcfl.9utqukqcvuro6a4p
	I1121 14:29:44.769338  252125 out.go:252]   - Booting up control plane ...
	I1121 14:29:44.769476  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:44.769652  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:44.769771  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:44.769940  252125 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:44.770087  252125 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:29:44.778391  252125 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:29:44.779655  252125 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:44.779729  252125 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:44.894196  252125 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:29:44.894364  252125 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:29:45.895053  252125 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000974959s
	I1121 14:29:45.898754  252125 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:29:45.898875  252125 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1121 14:29:45.899003  252125 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:29:45.899149  252125 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:29:48.621169  252125 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.722350043s
	I1121 14:29:49.059709  252125 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.160801257s
	I1121 14:29:49.276414  255774 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:49.276590  255774 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:49.280532  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:49.287374  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:49.290401  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:49.293308  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:49.297552  255774 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:49.632747  255774 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:46.726037  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:47.228665  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:47.725338  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:48.226199  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:48.725959  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:49.225812  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:49.725337  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.225293  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.310282  249617 kubeadm.go:1114] duration metric: took 12.17154172s to wait for elevateKubeSystemPrivileges
	I1121 14:29:50.310322  249617 kubeadm.go:403] duration metric: took 23.370802852s to StartCluster
	I1121 14:29:50.310347  249617 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:50.310438  249617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:50.311864  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:50.312167  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:50.312169  249617 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:50.312267  249617 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:50.312352  249617 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-012258"
	I1121 14:29:50.312372  249617 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-012258"
	I1121 14:29:50.312403  249617 host.go:66] Checking if "old-k8s-version-012258" exists ...
	I1121 14:29:50.312458  249617 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:50.312516  249617 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-012258"
	I1121 14:29:50.312530  249617 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-012258"
	I1121 14:29:50.312827  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.312965  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.314603  249617 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:50.316238  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:50.339724  249617 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:50.056893  255774 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:50.634602  255774 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:50.635720  255774 kubeadm.go:319] 
	I1121 14:29:50.635840  255774 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:50.635916  255774 kubeadm.go:319] 
	I1121 14:29:50.636085  255774 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:50.636139  255774 kubeadm.go:319] 
	I1121 14:29:50.636189  255774 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:50.636300  255774 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:50.636386  255774 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:50.636448  255774 kubeadm.go:319] 
	I1121 14:29:50.636574  255774 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:50.636584  255774 kubeadm.go:319] 
	I1121 14:29:50.636647  255774 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:50.636652  255774 kubeadm.go:319] 
	I1121 14:29:50.636709  255774 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:50.636796  255774 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:50.636878  255774 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:50.636886  255774 kubeadm.go:319] 
	I1121 14:29:50.636981  255774 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:50.637083  255774 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:50.637090  255774 kubeadm.go:319] 
	I1121 14:29:50.637247  255774 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 8nwcfl.9utqukqcvuro6a4p \
	I1121 14:29:50.637414  255774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:50.637449  255774 kubeadm.go:319] 	--control-plane 
	I1121 14:29:50.637460  255774 kubeadm.go:319] 
	I1121 14:29:50.637571  255774 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:50.637580  255774 kubeadm.go:319] 
	I1121 14:29:50.637672  255774 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 8nwcfl.9utqukqcvuro6a4p \
	I1121 14:29:50.637785  255774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:50.642202  255774 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:50.642513  255774 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:50.642647  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:50.642693  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:50.645524  255774 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:50.339929  249617 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-012258"
	I1121 14:29:50.339977  249617 host.go:66] Checking if "old-k8s-version-012258" exists ...
	I1121 14:29:50.340433  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.341133  249617 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:50.341154  249617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:50.341208  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:50.377822  249617 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:50.377846  249617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:50.377844  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:50.377907  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:50.410483  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:50.415901  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:50.468678  249617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:50.503643  249617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:50.536480  249617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:50.667362  249617 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
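
The sed pipeline at 14:29:50.415901 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.94.1 here). Reconstructing from that sed expression, the relevant part of the Corefile afterwards looks roughly like this (other directives unchanged and elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.94.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
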
	I1121 14:29:50.668484  249617 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-012258" to be "Ready" ...
	I1121 14:29:50.954598  249617 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:29:50.401999  252125 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502477764s
	I1121 14:29:50.419850  252125 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:50.933016  252125 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:50.948821  252125 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:50.949093  252125 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-921956 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:50.961417  252125 kubeadm.go:319] [bootstrap-token] Using token: uhuim0.7wh8hbt7v76eo7qs
	I1121 14:29:50.955828  249617 addons.go:530] duration metric: took 643.55365ms for enable addons: enabled=[storage-provisioner default-storageclass]
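
Addon installation above is a two-step flow: the manifest is written onto the node under /etc/kubernetes/addons/ (the "scp memory -->" lines), then applied with the cluster's own kubectl binary and kubeconfig. A sketch of the apply step using the same command shape as the log (assumes the manifest is already on the node; not minikube's addons.go code):

    // applyaddon.go: apply an addon manifest the way the log above does:
    // sudo KUBECONFIG=... kubectl apply -f /etc/kubernetes/addons/<addon>.yaml
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        manifest := "/etc/kubernetes/addons/storage-provisioner.yaml" // path from the log
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.28.0/kubectl", "apply", "-f", manifest)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("apply failed:", err)
        }
    }
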
	I1121 14:29:51.174831  249617 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-012258" context rescaled to 1 replicas
	I1121 14:29:50.963415  252125 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:50.963588  252125 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:50.971176  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:50.980644  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:50.985255  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:50.989946  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:50.994015  252125 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:51.128309  252125 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:51.550178  252125 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:52.128624  252125 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:52.129402  252125 kubeadm.go:319] 
	I1121 14:29:52.129496  252125 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:52.129528  252125 kubeadm.go:319] 
	I1121 14:29:52.129657  252125 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:52.129669  252125 kubeadm.go:319] 
	I1121 14:29:52.129705  252125 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:52.129798  252125 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:52.129906  252125 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:52.129923  252125 kubeadm.go:319] 
	I1121 14:29:52.129995  252125 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:52.130004  252125 kubeadm.go:319] 
	I1121 14:29:52.130078  252125 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:52.130087  252125 kubeadm.go:319] 
	I1121 14:29:52.130170  252125 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:52.130304  252125 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:52.130418  252125 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:52.130446  252125 kubeadm.go:319] 
	I1121 14:29:52.130574  252125 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:52.130677  252125 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:52.130685  252125 kubeadm.go:319] 
	I1121 14:29:52.130797  252125 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uhuim0.7wh8hbt7v76eo7qs \
	I1121 14:29:52.130966  252125 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:52.131000  252125 kubeadm.go:319] 	--control-plane 
	I1121 14:29:52.131035  252125 kubeadm.go:319] 
	I1121 14:29:52.131212  252125 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:52.131230  252125 kubeadm.go:319] 
	I1121 14:29:52.131343  252125 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uhuim0.7wh8hbt7v76eo7qs \
	I1121 14:29:52.131485  252125 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:52.132830  252125 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:52.132967  252125 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:52.133003  252125 cni.go:84] Creating CNI manager for ""
	I1121 14:29:52.133014  252125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:52.134968  252125 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:52.136241  252125 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:52.141107  252125 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:29:52.141131  252125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:52.155585  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:52.395340  252125 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:52.395422  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.395526  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-921956 minikube.k8s.io/updated_at=2025_11_21T14_29_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=no-preload-921956 minikube.k8s.io/primary=true
	I1121 14:29:52.481012  252125 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:52.481125  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.982198  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.481748  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.981282  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.646815  255774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:50.654615  255774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:29:50.654642  255774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:50.673887  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:50.944978  255774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:50.945143  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.945309  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-376255 minikube.k8s.io/updated_at=2025_11_21T14_29_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=default-k8s-diff-port-376255 minikube.k8s.io/primary=true
	I1121 14:29:50.960009  255774 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:51.036596  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:51.537134  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.037345  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.536941  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.037592  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.536966  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.036678  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.536697  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.037499  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.536808  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.610391  255774 kubeadm.go:1114] duration metric: took 4.665295307s to wait for elevateKubeSystemPrivileges
	I1121 14:29:55.610426  255774 kubeadm.go:403] duration metric: took 15.395907943s to StartCluster
	I1121 14:29:55.610448  255774 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:55.610511  255774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:55.612071  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:55.612346  255774 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:55.612498  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:55.612612  255774 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:55.612696  255774 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-376255"
	I1121 14:29:55.612713  255774 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-376255"
	I1121 14:29:55.612745  255774 host.go:66] Checking if "default-k8s-diff-port-376255" exists ...
	I1121 14:29:55.612775  255774 config.go:182] Loaded profile config "default-k8s-diff-port-376255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:55.612835  255774 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-376255"
	I1121 14:29:55.612852  255774 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-376255"
	I1121 14:29:55.613218  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.613392  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.613476  255774 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:55.615420  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:55.641842  255774 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-376255"
	I1121 14:29:55.641893  255774 host.go:66] Checking if "default-k8s-diff-port-376255" exists ...
	I1121 14:29:55.642317  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.647007  255774 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:55.648771  255774 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:55.648807  255774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:55.648882  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:55.679690  255774 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:55.679713  255774 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:55.679780  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:55.680868  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:55.703091  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:55.713751  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:55.781953  255774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:55.795189  255774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:55.811872  255774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:55.895061  255774 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:55.896386  255774 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-376255" to be "Ready" ...
	I1121 14:29:56.162438  255774 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1121 14:29:52.672645  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:29:55.172665  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
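
The "will retry" warnings come from node_ready.go, which polls the node object until its Ready condition reports True, within the 6m0s budget logged earlier. A rough equivalent of that check via kubectl JSONPath (a sketch; the test uses its own client, and the node name and kubeconfig path below are taken from the log):

    // nodeready.go: wait until a node's Ready condition is "True", the state
    // node_ready.go polls for in the log above. Sketch only.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func nodeReady(name, kubeconfig string) bool {
        out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "node", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
        name, kubeconfig := "old-k8s-version-012258", "/var/lib/minikube/kubeconfig"
        deadline := time.Now().Add(6 * time.Minute) // same budget as the log
        for time.Now().Before(deadline) {
            if nodeReady(name, kubeconfig) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for node Ready")
    }
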
	I1121 14:29:54.481750  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.981303  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.481778  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.981846  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:56.481336  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:56.981822  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:57.056720  252125 kubeadm.go:1114] duration metric: took 4.66135199s to wait for elevateKubeSystemPrivileges
	I1121 14:29:57.056760  252125 kubeadm.go:403] duration metric: took 16.414821557s to StartCluster
	I1121 14:29:57.056783  252125 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:57.056866  252125 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:57.059279  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:57.059591  252125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:57.059595  252125 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:57.059668  252125 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:57.059755  252125 addons.go:70] Setting storage-provisioner=true in profile "no-preload-921956"
	I1121 14:29:57.059780  252125 addons.go:239] Setting addon storage-provisioner=true in "no-preload-921956"
	I1121 14:29:57.059783  252125 addons.go:70] Setting default-storageclass=true in profile "no-preload-921956"
	I1121 14:29:57.059810  252125 host.go:66] Checking if "no-preload-921956" exists ...
	I1121 14:29:57.059818  252125 config.go:182] Loaded profile config "no-preload-921956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:57.059810  252125 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-921956"
	I1121 14:29:57.060267  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.060366  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.061615  252125 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:57.063049  252125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:57.087511  252125 addons.go:239] Setting addon default-storageclass=true in "no-preload-921956"
	I1121 14:29:57.087574  252125 host.go:66] Checking if "no-preload-921956" exists ...
	I1121 14:29:57.088046  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.088842  252125 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:57.090553  252125 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:57.090577  252125 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:57.090634  252125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-921956
	I1121 14:29:57.113518  252125 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:57.113567  252125 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:57.113644  252125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-921956
	I1121 14:29:57.116604  252125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/no-preload-921956/id_rsa Username:docker}
	I1121 14:29:57.140626  252125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/no-preload-921956/id_rsa Username:docker}
	I1121 14:29:57.162241  252125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:57.221336  252125 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:57.237060  252125 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:57.259845  252125 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:57.393470  252125 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:57.394577  252125 node_ready.go:35] waiting up to 6m0s for node "no-preload-921956" to be "Ready" ...
	I1121 14:29:57.623024  252125 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:29:57.414885  213058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.125971322s)
	W1121 14:29:57.414929  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1121 14:29:57.414939  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:57.414952  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:57.462838  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:57.462881  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:57.526637  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:57.526671  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:57.574224  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:57.574259  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:57.624430  252125 addons.go:530] duration metric: took 564.759261ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:57.898009  252125 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-921956" context rescaled to 1 replicas
	I1121 14:29:56.163632  255774 addons.go:530] duration metric: took 551.031985ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:56.399602  255774 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-376255" context rescaled to 1 replicas
	W1121 14:29:57.899680  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:29:57.174208  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:29:59.672116  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:30:00.114035  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1121 14:29:59.398191  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:01.898360  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:29:59.900344  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:01.900816  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:04.400331  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:01.672252  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:30:04.171805  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:30:05.672011  249617 node_ready.go:49] node "old-k8s-version-012258" is "Ready"
	I1121 14:30:05.672046  249617 node_ready.go:38] duration metric: took 15.003519412s for node "old-k8s-version-012258" to be "Ready" ...
	I1121 14:30:05.672064  249617 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:05.672125  249617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:05.689799  249617 api_server.go:72] duration metric: took 15.377593574s to wait for apiserver process to appear ...
	I1121 14:30:05.689974  249617 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:05.690001  249617 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:30:05.696217  249617 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1121 14:30:05.697950  249617 api_server.go:141] control plane version: v1.28.0
	I1121 14:30:05.697978  249617 api_server.go:131] duration metric: took 7.994891ms to wait for apiserver health ...
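
api_server.go declares the control plane healthy once GET /healthz on the apiserver returns 200 ("ok"). The same probe can be made through kubectl's raw API access (a sketch; the test hits the endpoint with its own HTTP client):

    // healthz.go: query the apiserver's /healthz endpoint via kubectl,
    // the endpoint api_server.go polls in the log above. Sketch only.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
            "get", "--raw=/healthz").CombinedOutput()
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        fmt.Println(string(out)) // prints "ok" when the apiserver is healthy
    }
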
	I1121 14:30:05.697990  249617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:05.702726  249617 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:05.702769  249617 system_pods.go:61] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.702778  249617 system_pods.go:61] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.702785  249617 system_pods.go:61] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.702796  249617 system_pods.go:61] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.702808  249617 system_pods.go:61] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.702818  249617 system_pods.go:61] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.702822  249617 system_pods.go:61] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.702829  249617 system_pods.go:61] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.702837  249617 system_pods.go:74] duration metric: took 4.84094ms to wait for pod list to return data ...
	I1121 14:30:05.702852  249617 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:05.705127  249617 default_sa.go:45] found service account: "default"
	I1121 14:30:05.705151  249617 default_sa.go:55] duration metric: took 2.290103ms for default service account to be created ...
	I1121 14:30:05.705161  249617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:05.710235  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:05.710318  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.710330  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.710337  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.710367  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.710374  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.710380  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.710385  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.710404  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.710597  249617 retry.go:31] will retry after 257.065607ms: missing components: kube-dns
	I1121 14:30:05.972608  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:05.972648  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.972657  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.972665  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.972676  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.972682  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.972687  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.972692  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.972707  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.972726  249617 retry.go:31] will retry after 339.692313ms: missing components: kube-dns
	I1121 14:30:06.317124  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:06.317155  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Running
	I1121 14:30:06.317160  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:06.317163  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:06.317167  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:06.317171  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:06.317175  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:06.317178  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:06.317181  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Running
	I1121 14:30:06.317188  249617 system_pods.go:126] duration metric: took 612.020803ms to wait for k8s-apps to be running ...
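
system_pods.go re-lists the kube-system pods and retries while a required component is not yet Running, which is where the "missing components: kube-dns" retries above come from. A one-shot version of that check with kubectl (a sketch, not the test helper itself):

    // kubesystem.go: list kube-system pods and report any that are not Running,
    // the condition behind the "missing components" retries above. Sketch only.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
            "get", "pods", "-n", "kube-system",
            "-o", `jsonpath={range .items[*]}{.metadata.name}={.status.phase}{"\n"}{end}`).Output()
        if err != nil {
            fmt.Println("listing pods failed:", err)
            return
        }
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" && !strings.HasSuffix(line, "=Running") {
                fmt.Println("not ready yet:", line)
            }
        }
    }
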
	I1121 14:30:06.317194  249617 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:06.317250  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:06.332295  249617 system_svc.go:56] duration metric: took 15.088564ms WaitForService to wait for kubelet
	I1121 14:30:06.332331  249617 kubeadm.go:587] duration metric: took 16.020134285s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:06.332357  249617 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:06.338044  249617 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:06.338071  249617 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:06.338084  249617 node_conditions.go:105] duration metric: took 5.72136ms to run NodePressure ...
	I1121 14:30:06.338096  249617 start.go:242] waiting for startup goroutines ...
	I1121 14:30:06.338102  249617 start.go:247] waiting for cluster config update ...
	I1121 14:30:06.338113  249617 start.go:256] writing updated cluster config ...
	I1121 14:30:06.338382  249617 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:06.342534  249617 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:06.347323  249617 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vst4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.352062  249617 pod_ready.go:94] pod "coredns-5dd5756b68-vst4c" is "Ready"
	I1121 14:30:06.352087  249617 pod_ready.go:86] duration metric: took 4.697932ms for pod "coredns-5dd5756b68-vst4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.354946  249617 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.359326  249617 pod_ready.go:94] pod "etcd-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.359355  249617 pod_ready.go:86] duration metric: took 4.388182ms for pod "etcd-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.362007  249617 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.366060  249617 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.366081  249617 pod_ready.go:86] duration metric: took 4.051984ms for pod "kube-apiserver-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.368789  249617 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.746914  249617 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.746952  249617 pod_ready.go:86] duration metric: took 378.141903ms for pod "kube-controller-manager-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.947790  249617 pod_ready.go:83] waiting for pod "kube-proxy-wsp2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.347266  249617 pod_ready.go:94] pod "kube-proxy-wsp2w" is "Ready"
	I1121 14:30:07.347291  249617 pod_ready.go:86] duration metric: took 399.477159ms for pod "kube-proxy-wsp2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.547233  249617 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.946728  249617 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-012258" is "Ready"
	I1121 14:30:07.946756  249617 pod_ready.go:86] duration metric: took 399.500525ms for pod "kube-scheduler-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.946772  249617 pod_ready.go:40] duration metric: took 1.604187461s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:08.009909  249617 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1121 14:30:08.014607  249617 out.go:203] 
	W1121 14:30:08.016075  249617 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1121 14:30:08.020782  249617 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1121 14:30:08.022622  249617 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-012258" cluster and "default" namespace by default
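
The kubectl warning a few lines up is purely about version skew: the host's kubectl is 1.34 while this cluster runs 1.28, a difference of 6 minor versions, well outside the one-minor-version window kubectl is generally supported for, hence the suggestion to use "minikube kubectl" instead. The arithmetic behind the reported "(minor skew: 6)":

    // skew.go: the minor-version skew computation behind "(minor skew: 6)".
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func minor(v string) int {
        m, _ := strconv.Atoi(strings.Split(v, ".")[1])
        return m
    }

    func main() {
        client, server := "1.34.2", "1.28.0" // versions reported in the log
        skew := minor(client) - minor(server)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("minor skew: %d\n", skew) // prints 6
    }
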
	I1121 14:30:05.115052  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1121 14:30:05.115115  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:05.115188  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:05.143819  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:05.143839  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:30:05.143843  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:05.143846  213058 cri.go:89] found id: ""
	I1121 14:30:05.143853  213058 logs.go:282] 3 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:05.143912  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.148585  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.152984  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.156944  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:05.157004  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:05.185404  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:05.185430  213058 cri.go:89] found id: ""
	I1121 14:30:05.185440  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:05.185498  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.190360  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:05.190432  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:05.222964  213058 cri.go:89] found id: ""
	I1121 14:30:05.222989  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.222999  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:05.223006  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:05.223058  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:05.254414  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:05.254436  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:05.254440  213058 cri.go:89] found id: ""
	I1121 14:30:05.254447  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:05.254505  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.258766  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.262456  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:05.262524  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:05.288454  213058 cri.go:89] found id: ""
	I1121 14:30:05.288486  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.288496  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:05.288505  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:05.288598  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:05.317814  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:05.317841  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:05.317847  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:05.317851  213058 cri.go:89] found id: ""
	I1121 14:30:05.317861  213058 logs.go:282] 3 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:05.317930  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.322506  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.326684  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.330828  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:05.330957  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:05.360073  213058 cri.go:89] found id: ""
	I1121 14:30:05.360098  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.360107  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:05.360116  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:05.360171  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:05.388524  213058 cri.go:89] found id: ""
	I1121 14:30:05.388561  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.388573  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:05.388587  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:05.388602  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:05.427247  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:05.427279  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:05.517583  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:05.517615  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:05.556205  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:30:05.556238  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:30:05.601637  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:05.601692  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:05.642125  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:05.642167  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:05.707252  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:05.707295  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:05.747947  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:05.747990  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:05.767646  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:05.767678  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
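
The log-gathering block above works in two steps per component: discover container IDs with "crictl ps -a --quiet --name=<component>", then tail each one with "crictl logs --tail 400 <id>". A compact sketch of that flow for a single component (run on the node, e.g. via minikube ssh; not the logs.go implementation):

    // crilogs.go: find a component's containers via crictl and tail their logs,
    // mirroring the crictl ps/logs pairs in the log above. Sketch only.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        component := "kube-apiserver" // one of the names queried above
        ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            fmt.Println("crictl ps failed:", err)
            return
        }
        for _, id := range strings.Fields(string(ids)) {
            fmt.Printf("=== %s (%s) ===\n", component, id)
            out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Print(string(out))
        }
    }
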
	W1121 14:30:04.398534  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:06.897181  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:08.897492  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:06.900285  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	I1121 14:30:07.400113  255774 node_ready.go:49] node "default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:07.400148  255774 node_ready.go:38] duration metric: took 11.503726167s for node "default-k8s-diff-port-376255" to be "Ready" ...
	I1121 14:30:07.400166  255774 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:07.400227  255774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:07.416428  255774 api_server.go:72] duration metric: took 11.804040955s to wait for apiserver process to appear ...
	I1121 14:30:07.416462  255774 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:07.416487  255774 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1121 14:30:07.423355  255774 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1121 14:30:07.424441  255774 api_server.go:141] control plane version: v1.34.1
	I1121 14:30:07.424471  255774 api_server.go:131] duration metric: took 8.001103ms to wait for apiserver health ...
	I1121 14:30:07.424480  255774 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:07.428816  255774 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:07.428856  255774 system_pods.go:61] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.428866  255774 system_pods.go:61] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.428874  255774 system_pods.go:61] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.428880  255774 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.428886  255774 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.428891  255774 system_pods.go:61] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.428899  255774 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.428912  255774 system_pods.go:61] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.428921  255774 system_pods.go:74] duration metric: took 4.433771ms to wait for pod list to return data ...
	I1121 14:30:07.428932  255774 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:07.431771  255774 default_sa.go:45] found service account: "default"
	I1121 14:30:07.431794  255774 default_sa.go:55] duration metric: took 2.856811ms for default service account to be created ...
	I1121 14:30:07.431804  255774 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:07.435787  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.435816  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.435821  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.435826  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.435830  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.435833  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.435836  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.435841  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.435846  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.435871  255774 retry.go:31] will retry after 217.060579ms: missing components: kube-dns
	I1121 14:30:07.656900  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.656930  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.656937  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.656945  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.656950  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.656955  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.656959  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.656964  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.656970  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.656989  255774 retry.go:31] will retry after 330.648304ms: missing components: kube-dns
	I1121 14:30:07.995514  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.995612  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.995626  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.995636  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.995642  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.995653  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.995659  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.995664  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.995683  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.995713  255774 retry.go:31] will retry after 466.383408ms: missing components: kube-dns
	I1121 14:30:08.466385  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:08.466414  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Running
	I1121 14:30:08.466419  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:08.466423  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:08.466427  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:08.466430  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:08.466435  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:08.466438  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:08.466441  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Running
	I1121 14:30:08.466448  255774 system_pods.go:126] duration metric: took 1.034639333s to wait for k8s-apps to be running ...
	I1121 14:30:08.466454  255774 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:08.466495  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:08.480058  255774 system_svc.go:56] duration metric: took 13.59071ms WaitForService to wait for kubelet
	I1121 14:30:08.480087  255774 kubeadm.go:587] duration metric: took 12.867708638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:08.480104  255774 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:08.483054  255774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:08.483077  255774 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:08.483089  255774 node_conditions.go:105] duration metric: took 2.980591ms to run NodePressure ...
	I1121 14:30:08.483101  255774 start.go:242] waiting for startup goroutines ...
	I1121 14:30:08.483107  255774 start.go:247] waiting for cluster config update ...
	I1121 14:30:08.483116  255774 start.go:256] writing updated cluster config ...
	I1121 14:30:08.483378  255774 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:08.487457  255774 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:08.490869  255774 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.495613  255774 pod_ready.go:94] pod "coredns-66bc5c9577-fr27b" is "Ready"
	I1121 14:30:08.495638  255774 pod_ready.go:86] duration metric: took 4.745112ms for pod "coredns-66bc5c9577-fr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.498070  255774 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.502098  255774 pod_ready.go:94] pod "etcd-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.502122  255774 pod_ready.go:86] duration metric: took 4.029361ms for pod "etcd-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.504276  255774 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.508229  255774 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.508250  255774 pod_ready.go:86] duration metric: took 3.957821ms for pod "kube-apiserver-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.510387  255774 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.891344  255774 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.891369  255774 pod_ready.go:86] duration metric: took 380.959206ms for pod "kube-controller-manager-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.091636  255774 pod_ready.go:83] waiting for pod "kube-proxy-hdplf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.492078  255774 pod_ready.go:94] pod "kube-proxy-hdplf" is "Ready"
	I1121 14:30:09.492108  255774 pod_ready.go:86] duration metric: took 400.444722ms for pod "kube-proxy-hdplf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.693278  255774 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:10.092105  255774 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:10.092133  255774 pod_ready.go:86] duration metric: took 398.824976ms for pod "kube-scheduler-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:10.092146  255774 pod_ready.go:40] duration metric: took 1.604655578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:10.138628  255774 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:30:10.140593  255774 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-376255" cluster and "default" namespace by default
	I1121 14:30:08.754284  213058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (2.986586875s)
	W1121 14:30:08.754342  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60538->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60538->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1121 14:30:08.754352  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:08.754366  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:08.789119  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:08.789149  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:08.842933  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:08.842974  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:08.880878  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:08.880919  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:08.910920  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:30:08.910953  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.440020  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:11.440496  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:11.440556  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:11.440601  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:11.472645  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:11.472669  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:11.472674  213058 cri.go:89] found id: ""
	I1121 14:30:11.472683  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:11.472748  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.478061  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.482946  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:11.483034  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:11.517693  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:11.517722  213058 cri.go:89] found id: ""
	I1121 14:30:11.517732  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:11.517797  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.523621  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:11.523699  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:11.559155  213058 cri.go:89] found id: ""
	I1121 14:30:11.559194  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.559204  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:11.559212  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:11.559271  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:11.595093  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:11.595127  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:11.595133  213058 cri.go:89] found id: ""
	I1121 14:30:11.595143  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:11.595194  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.600085  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.604973  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:11.605048  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:11.639606  213058 cri.go:89] found id: ""
	I1121 14:30:11.639636  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.639647  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:11.639653  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:11.639713  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:11.684373  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:11.684400  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.684405  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:11.684410  213058 cri.go:89] found id: ""
	I1121 14:30:11.684421  213058 logs.go:282] 3 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:11.684482  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.689732  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.695253  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.701315  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:11.701388  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:11.732802  213058 cri.go:89] found id: ""
	I1121 14:30:11.732831  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.732841  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:11.732848  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:11.732907  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:11.761686  213058 cri.go:89] found id: ""
	I1121 14:30:11.761717  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.761729  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:11.761741  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:11.761756  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:11.816634  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:11.816670  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:11.846024  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:30:11.846055  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.876932  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:11.876964  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:11.912984  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:11.913018  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:11.965381  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:11.965423  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:11.997477  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:11.997509  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:12.011497  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:12.011524  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:12.071024  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:12.071049  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:12.071065  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:12.106865  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:12.106898  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:12.141245  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:12.141276  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:12.176551  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:12.176600  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:12.268742  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:12.268780  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	W1121 14:30:10.897620  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	I1121 14:30:11.398100  252125 node_ready.go:49] node "no-preload-921956" is "Ready"
	I1121 14:30:11.398128  252125 node_ready.go:38] duration metric: took 14.003530083s for node "no-preload-921956" to be "Ready" ...
	I1121 14:30:11.398142  252125 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:11.398195  252125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:11.412043  252125 api_server.go:72] duration metric: took 14.35241025s to wait for apiserver process to appear ...
	I1121 14:30:11.412070  252125 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:11.412087  252125 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1121 14:30:11.417254  252125 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1121 14:30:11.418517  252125 api_server.go:141] control plane version: v1.34.1
	I1121 14:30:11.418570  252125 api_server.go:131] duration metric: took 6.492303ms to wait for apiserver health ...
	I1121 14:30:11.418581  252125 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:11.421927  252125 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:11.422024  252125 system_pods.go:61] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.422034  252125 system_pods.go:61] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.422047  252125 system_pods.go:61] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.422059  252125 system_pods.go:61] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.422069  252125 system_pods.go:61] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.422073  252125 system_pods.go:61] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.422077  252125 system_pods.go:61] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.422082  252125 system_pods.go:61] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.422094  252125 system_pods.go:74] duration metric: took 3.505153ms to wait for pod list to return data ...
	I1121 14:30:11.422109  252125 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:11.424685  252125 default_sa.go:45] found service account: "default"
	I1121 14:30:11.424710  252125 default_sa.go:55] duration metric: took 2.591611ms for default service account to be created ...
	I1121 14:30:11.424722  252125 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:11.427627  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.427680  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.427689  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.427703  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.427713  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.427721  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.427726  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.427731  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.427737  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.427768  252125 retry.go:31] will retry after 234.428318ms: missing components: kube-dns
	I1121 14:30:11.669788  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.669831  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.669840  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.669850  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.669858  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.669865  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.669871  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.669877  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.669893  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.669919  252125 retry.go:31] will retry after 250.085803ms: missing components: kube-dns
	I1121 14:30:11.924517  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.924602  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.924614  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.924627  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.924633  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.924642  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.924647  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.924653  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.924661  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.924682  252125 retry.go:31] will retry after 441.862758ms: missing components: kube-dns
	I1121 14:30:12.371065  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:12.371110  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:12.371122  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:12.371131  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:12.371136  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:12.371142  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:12.371147  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:12.371158  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:12.371170  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:12.371189  252125 retry.go:31] will retry after 502.578888ms: missing components: kube-dns
	I1121 14:30:12.879209  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:12.879243  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Running
	I1121 14:30:12.879249  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:12.879253  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:12.879258  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:12.879268  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:12.879271  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:12.879275  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:12.879278  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Running
	I1121 14:30:12.879289  252125 system_pods.go:126] duration metric: took 1.454561179s to wait for k8s-apps to be running ...
	I1121 14:30:12.879301  252125 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:12.879351  252125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:12.894061  252125 system_svc.go:56] duration metric: took 14.74714ms WaitForService to wait for kubelet
	I1121 14:30:12.894092  252125 kubeadm.go:587] duration metric: took 15.834465857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:12.894115  252125 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:12.897599  252125 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:12.897630  252125 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:12.897641  252125 node_conditions.go:105] duration metric: took 3.520753ms to run NodePressure ...
	I1121 14:30:12.897652  252125 start.go:242] waiting for startup goroutines ...
	I1121 14:30:12.897659  252125 start.go:247] waiting for cluster config update ...
	I1121 14:30:12.897669  252125 start.go:256] writing updated cluster config ...
	I1121 14:30:12.897983  252125 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:12.902897  252125 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:12.906562  252125 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s4rzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.912263  252125 pod_ready.go:94] pod "coredns-66bc5c9577-s4rzb" is "Ready"
	I1121 14:30:12.912286  252125 pod_ready.go:86] duration metric: took 5.702456ms for pod "coredns-66bc5c9577-s4rzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.915190  252125 pod_ready.go:83] waiting for pod "etcd-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.919870  252125 pod_ready.go:94] pod "etcd-no-preload-921956" is "Ready"
	I1121 14:30:12.919896  252125 pod_ready.go:86] duration metric: took 4.68423ms for pod "etcd-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.921926  252125 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.925984  252125 pod_ready.go:94] pod "kube-apiserver-no-preload-921956" is "Ready"
	I1121 14:30:12.926012  252125 pod_ready.go:86] duration metric: took 4.065762ms for pod "kube-apiserver-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.928283  252125 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.307608  252125 pod_ready.go:94] pod "kube-controller-manager-no-preload-921956" is "Ready"
	I1121 14:30:13.307639  252125 pod_ready.go:86] duration metric: took 379.335151ms for pod "kube-controller-manager-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.508229  252125 pod_ready.go:83] waiting for pod "kube-proxy-wmx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.907070  252125 pod_ready.go:94] pod "kube-proxy-wmx7z" is "Ready"
	I1121 14:30:13.907101  252125 pod_ready.go:86] duration metric: took 398.843128ms for pod "kube-proxy-wmx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.108040  252125 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.507264  252125 pod_ready.go:94] pod "kube-scheduler-no-preload-921956" is "Ready"
	I1121 14:30:14.507293  252125 pod_ready.go:86] duration metric: took 399.219492ms for pod "kube-scheduler-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.507307  252125 pod_ready.go:40] duration metric: took 1.604362709s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:14.554506  252125 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:30:14.556366  252125 out.go:179] * Done! kubectl is now configured to use "no-preload-921956" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b902d4d95366e       56cc512116c8f       7 seconds ago       Running             busybox                   0                   650f980a2b9de       busybox                                          default
	4cd21f3197431       6e38f40d628db       12 seconds ago      Running             storage-provisioner       0                   23e45253f8c7e       storage-provisioner                              kube-system
	5c05a4ce99693       ead0a4a53df89       12 seconds ago      Running             coredns                   0                   4a38fce5ce541       coredns-5dd5756b68-vst4c                         kube-system
	14f62b42937d6       409467f978b4a       23 seconds ago      Running             kindnet-cni               0                   2189620d082f5       kindnet-f6t7s                                    kube-system
	7b9fdeac6c297       ea1030da44aa1       27 seconds ago      Running             kube-proxy                0                   7e0d6db9e6b3d       kube-proxy-wsp2w                                 kube-system
	2ff2d15ad456d       f6f496300a2ae       46 seconds ago      Running             kube-scheduler            0                   a2abbb0781499       kube-scheduler-old-k8s-version-012258            kube-system
	bff5755d3bb4c       bb5e0dde9054c       46 seconds ago      Running             kube-apiserver            0                   0f35f911732de       kube-apiserver-old-k8s-version-012258            kube-system
	24c3a525c2057       73deb9a3f7025       46 seconds ago      Running             etcd                      0                   11bd8f3a7d6a7       etcd-old-k8s-version-012258                      kube-system
	9694941d50234       4be79c38a4bab       46 seconds ago      Running             kube-controller-manager   0                   45f5f9128f983       kube-controller-manager-old-k8s-version-012258   kube-system
	
	
	==> containerd <==
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.663617176Z" level=info msg="StartContainer for \"5c05a4ce996931fe774ecca66b33620ebb8a09a835d63b1f0ddd04105345bb76\""
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.663619446Z" level=info msg="Container 4cd21f31974314e5db6d58ee50bbd67f0daf675c91355ac568f2d0140f7a8d6c: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.664751507Z" level=info msg="connecting to shim 5c05a4ce996931fe774ecca66b33620ebb8a09a835d63b1f0ddd04105345bb76" address="unix:///run/containerd/s/0b88234bafabade7aa89e6626d296420e30066b3991abfec21350310268aa8a7" protocol=ttrpc version=3
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.672254961Z" level=info msg="CreateContainer within sandbox \"23e45253f8c7ee6d14427e06305531cf9d976c8c976bd1a48cedecbea7976313\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"4cd21f31974314e5db6d58ee50bbd67f0daf675c91355ac568f2d0140f7a8d6c\""
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.673493529Z" level=info msg="StartContainer for \"4cd21f31974314e5db6d58ee50bbd67f0daf675c91355ac568f2d0140f7a8d6c\""
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.674511601Z" level=info msg="connecting to shim 4cd21f31974314e5db6d58ee50bbd67f0daf675c91355ac568f2d0140f7a8d6c" address="unix:///run/containerd/s/a82bd5a517bceb0823436c092fd804897bb31601e146a9022325dd22f0adc41d" protocol=ttrpc version=3
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.728082486Z" level=info msg="StartContainer for \"4cd21f31974314e5db6d58ee50bbd67f0daf675c91355ac568f2d0140f7a8d6c\" returns successfully"
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.728959460Z" level=info msg="StartContainer for \"5c05a4ce996931fe774ecca66b33620ebb8a09a835d63b1f0ddd04105345bb76\" returns successfully"
	Nov 21 14:30:08 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:08.528101810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:fa895e52-0bff-4604-8b62-fd0f087015e8,Namespace:default,Attempt:0,}"
	Nov 21 14:30:08 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:08.569589613Z" level=info msg="connecting to shim 650f980a2b9de14dfd5f63378bb97f102c6ac2132a9ada4c16a5ef068e7d2a2c" address="unix:///run/containerd/s/5e291cbce6d45d78977b32eb821eca28abc28581b57d5fa47a45bc5da629cfec" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:30:08 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:08.641364674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:fa895e52-0bff-4604-8b62-fd0f087015e8,Namespace:default,Attempt:0,} returns sandbox id \"650f980a2b9de14dfd5f63378bb97f102c6ac2132a9ada4c16a5ef068e7d2a2c\""
	Nov 21 14:30:08 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:08.643152152Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.895297688Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.896188926Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.897638365Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.900612481Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.901224670Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.258026607s"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.901267593Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.903245567Z" level=info msg="CreateContainer within sandbox \"650f980a2b9de14dfd5f63378bb97f102c6ac2132a9ada4c16a5ef068e7d2a2c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.911518272Z" level=info msg="Container b902d4d95366e27e951b3537262d21dd82f809e7ad84dd34083f4c621ca4b23b: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.918169889Z" level=info msg="CreateContainer within sandbox \"650f980a2b9de14dfd5f63378bb97f102c6ac2132a9ada4c16a5ef068e7d2a2c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"b902d4d95366e27e951b3537262d21dd82f809e7ad84dd34083f4c621ca4b23b\""
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.918839732Z" level=info msg="StartContainer for \"b902d4d95366e27e951b3537262d21dd82f809e7ad84dd34083f4c621ca4b23b\""
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.919846340Z" level=info msg="connecting to shim b902d4d95366e27e951b3537262d21dd82f809e7ad84dd34083f4c621ca4b23b" address="unix:///run/containerd/s/5e291cbce6d45d78977b32eb821eca28abc28581b57d5fa47a45bc5da629cfec" protocol=ttrpc version=3
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.971722510Z" level=info msg="StartContainer for \"b902d4d95366e27e951b3537262d21dd82f809e7ad84dd34083f4c621ca4b23b\" returns successfully"
	Nov 21 14:30:17 old-k8s-version-012258 containerd[665]: E1121 14:30:17.320736     665 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [5c05a4ce996931fe774ecca66b33620ebb8a09a835d63b1f0ddd04105345bb76] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46429 - 55004 "HINFO IN 8589807954474471726.703758692042272696. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027956792s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-012258
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-012258
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=old-k8s-version-012258
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_29_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:29:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-012258
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:30:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:30:07 +0000   Fri, 21 Nov 2025 14:29:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:30:07 +0000   Fri, 21 Nov 2025 14:29:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:30:07 +0000   Fri, 21 Nov 2025 14:29:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:30:07 +0000   Fri, 21 Nov 2025 14:30:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-012258
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                b90c39b5-fac8-48f3-bfec-9ba818fb6bc5
	  Boot ID:                    f900700b-0668-4d24-87ff-85e15fbda365
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-vst4c                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-012258                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-f6t7s                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-012258             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-012258    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-wsp2w                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-012258             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 48s)  kubelet          Node old-k8s-version-012258 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 48s)  kubelet          Node old-k8s-version-012258 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x7 over 48s)  kubelet          Node old-k8s-version-012258 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  41s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-012258 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-012258 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-012258 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-012258 event: Registered Node old-k8s-version-012258 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-012258 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001887] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086016] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.440508] i8042: Warning: Keylock active
	[  +0.011202] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.526419] block sda: the capability attribute has been deprecated.
	[  +0.095215] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027093] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.485024] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [24c3a525c2057be14d63a0b83d320542988e06c148db3abcea70288b84ad9d55] <==
	{"level":"info","ts":"2025-11-21T14:29:32.241252Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-21T14:29:32.243038Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-21T14:29:32.243254Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-21T14:29:32.243303Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-21T14:29:32.24334Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-21T14:29:32.24338Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-21T14:29:32.527604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-21T14:29:32.527651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-21T14:29:32.527692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-11-21T14:29:32.527708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-11-21T14:29:32.527717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-21T14:29:32.527728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-21T14:29:32.527737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-21T14:29:32.529559Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-012258 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-21T14:29:32.529578Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:29:32.529669Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:29:32.529972Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-21T14:29:32.529994Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-21T14:29:32.529757Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:29:32.5309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-21T14:29:32.531625Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:29:32.53516Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:29:32.535207Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:29:32.536282Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-11-21T14:29:35.645599Z","caller":"traceutil/trace.go:171","msg":"trace[1619369888] transaction","detail":"{read_only:false; response_revision:181; number_of_response:1; }","duration":"103.859179ms","start":"2025-11-21T14:29:35.541719Z","end":"2025-11-21T14:29:35.645578Z","steps":["trace[1619369888] 'process raft request'  (duration: 101.685301ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:30:18 up  1:12,  0 user,  load average: 3.83, 3.02, 1.92
	Linux old-k8s-version-012258 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [14f62b42937d63a9d982189e10059fb863ccdf5ca3eedc2cdab43a2e258708b6] <==
	I1121 14:29:54.836873       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:29:54.837124       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1121 14:29:54.837288       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:29:54.837307       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:29:54.837325       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:29:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:29:55.132056       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:29:55.132129       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:29:55.132143       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:29:55.132319       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:29:55.432449       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:29:55.432473       1 metrics.go:72] Registering metrics
	I1121 14:29:55.432525       1 controller.go:711] "Syncing nftables rules"
	I1121 14:30:05.138150       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:30:05.138210       1 main.go:301] handling current node
	I1121 14:30:15.134126       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:30:15.134169       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bff5755d3bb4c01170cea10eea2a0bd7eb5e4e85eff679e4fd11f262f20d8b28] <==
	I1121 14:29:34.045351       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1121 14:29:34.047124       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1121 14:29:34.047217       1 shared_informer.go:318] Caches are synced for configmaps
	I1121 14:29:34.051166       1 controller.go:624] quota admission added evaluator for: namespaces
	I1121 14:29:34.059678       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1121 14:29:34.059713       1 aggregator.go:166] initial CRD sync complete...
	I1121 14:29:34.059721       1 autoregister_controller.go:141] Starting autoregister controller
	I1121 14:29:34.059728       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:29:34.059737       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:29:34.239983       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:29:34.956388       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:29:34.961744       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:29:34.961779       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:29:35.529678       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:29:35.676651       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:29:35.776358       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:29:35.783426       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1121 14:29:35.785070       1 controller.go:624] quota admission added evaluator for: endpoints
	I1121 14:29:35.792737       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:29:35.992086       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1121 14:29:37.085397       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1121 14:29:37.099935       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:29:37.111942       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1121 14:29:50.620131       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1121 14:29:50.819999       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9694941d5023471382cb75dbe0e35927477b046c67f0406d94b0c2eab9737245] <==
	I1121 14:29:49.846641       1 shared_informer.go:318] Caches are synced for disruption
	I1121 14:29:49.855897       1 shared_informer.go:318] Caches are synced for stateful set
	I1121 14:29:49.881551       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1121 14:29:49.969509       1 shared_informer.go:318] Caches are synced for attach detach
	I1121 14:29:50.014167       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:29:50.025976       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:29:50.366198       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:29:50.366669       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1121 14:29:50.381693       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:29:50.624660       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1121 14:29:50.704235       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1121 14:29:50.830312       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wsp2w"
	I1121 14:29:50.831838       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-f6t7s"
	I1121 14:29:50.927521       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vst4c"
	I1121 14:29:50.936234       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qv6fz"
	I1121 14:29:50.964100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="339.351723ms"
	I1121 14:29:50.978176       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-qv6fz"
	I1121 14:29:50.986743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.057827ms"
	I1121 14:29:50.996010       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.927032ms"
	I1121 14:29:50.996568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="343.659µs"
	I1121 14:30:05.215933       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.246µs"
	I1121 14:30:05.230917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="161.827µs"
	I1121 14:30:06.296502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.965394ms"
	I1121 14:30:06.296638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.82µs"
	I1121 14:30:09.770369       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [7b9fdeac6c297da9e16ba05abceeee4a77258137fd28986a17f946713c8ad0fe] <==
	I1121 14:29:51.457956       1 server_others.go:69] "Using iptables proxy"
	I1121 14:29:51.467641       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1121 14:29:51.489328       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:29:51.492051       1 server_others.go:152] "Using iptables Proxier"
	I1121 14:29:51.492086       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1121 14:29:51.492094       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1121 14:29:51.492128       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1121 14:29:51.492424       1 server.go:846] "Version info" version="v1.28.0"
	I1121 14:29:51.492443       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:29:51.493149       1 config.go:97] "Starting endpoint slice config controller"
	I1121 14:29:51.493193       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1121 14:29:51.493154       1 config.go:188] "Starting service config controller"
	I1121 14:29:51.493237       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1121 14:29:51.493237       1 config.go:315] "Starting node config controller"
	I1121 14:29:51.493252       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1121 14:29:51.593782       1 shared_informer.go:318] Caches are synced for service config
	I1121 14:29:51.593822       1 shared_informer.go:318] Caches are synced for node config
	I1121 14:29:51.593799       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2ff2d15ad456d7eabe7dc6efd47603a67afa696fd1091b577b9633b6669bd9ec] <==
	W1121 14:29:34.007803       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1121 14:29:34.007838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1121 14:29:34.007899       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1121 14:29:34.007919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1121 14:29:34.904012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1121 14:29:34.904113       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1121 14:29:34.906819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1121 14:29:34.906855       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1121 14:29:34.982047       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1121 14:29:34.982173       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1121 14:29:35.046771       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1121 14:29:35.046802       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 14:29:35.065222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1121 14:29:35.065262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1121 14:29:35.119288       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1121 14:29:35.119329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1121 14:29:35.148021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1121 14:29:35.148079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1121 14:29:35.156816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1121 14:29:35.156866       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1121 14:29:35.323566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1121 14:29:35.323609       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1121 14:29:35.347343       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1121 14:29:35.347400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1121 14:29:38.002740       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 21 14:29:49 old-k8s-version-012258 kubelet[1516]: I1121 14:29:49.923571    1516 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.836162    1516 topology_manager.go:215] "Topology Admit Handler" podUID="bc079c02-40ff-4f10-947b-76f1e9784572" podNamespace="kube-system" podName="kube-proxy-wsp2w"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.839382    1516 topology_manager.go:215] "Topology Admit Handler" podUID="bd28a6b5-0214-42be-8883-1adf1217761c" podNamespace="kube-system" podName="kindnet-f6t7s"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.946858    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc079c02-40ff-4f10-947b-76f1e9784572-xtables-lock\") pod \"kube-proxy-wsp2w\" (UID: \"bc079c02-40ff-4f10-947b-76f1e9784572\") " pod="kube-system/kube-proxy-wsp2w"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.948665    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bd28a6b5-0214-42be-8883-1adf1217761c-cni-cfg\") pod \"kindnet-f6t7s\" (UID: \"bd28a6b5-0214-42be-8883-1adf1217761c\") " pod="kube-system/kindnet-f6t7s"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.949046    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd28a6b5-0214-42be-8883-1adf1217761c-xtables-lock\") pod \"kindnet-f6t7s\" (UID: \"bd28a6b5-0214-42be-8883-1adf1217761c\") " pod="kube-system/kindnet-f6t7s"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.949101    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgrts\" (UniqueName: \"kubernetes.io/projected/bc079c02-40ff-4f10-947b-76f1e9784572-kube-api-access-vgrts\") pod \"kube-proxy-wsp2w\" (UID: \"bc079c02-40ff-4f10-947b-76f1e9784572\") " pod="kube-system/kube-proxy-wsp2w"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.950051    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd28a6b5-0214-42be-8883-1adf1217761c-lib-modules\") pod \"kindnet-f6t7s\" (UID: \"bd28a6b5-0214-42be-8883-1adf1217761c\") " pod="kube-system/kindnet-f6t7s"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.950176    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcpxl\" (UniqueName: \"kubernetes.io/projected/bd28a6b5-0214-42be-8883-1adf1217761c-kube-api-access-jcpxl\") pod \"kindnet-f6t7s\" (UID: \"bd28a6b5-0214-42be-8883-1adf1217761c\") " pod="kube-system/kindnet-f6t7s"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.950220    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bc079c02-40ff-4f10-947b-76f1e9784572-kube-proxy\") pod \"kube-proxy-wsp2w\" (UID: \"bc079c02-40ff-4f10-947b-76f1e9784572\") " pod="kube-system/kube-proxy-wsp2w"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.950255    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc079c02-40ff-4f10-947b-76f1e9784572-lib-modules\") pod \"kube-proxy-wsp2w\" (UID: \"bc079c02-40ff-4f10-947b-76f1e9784572\") " pod="kube-system/kube-proxy-wsp2w"
	Nov 21 14:29:55 old-k8s-version-012258 kubelet[1516]: I1121 14:29:55.257777    1516 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wsp2w" podStartSLOduration=5.257722111 podCreationTimestamp="2025-11-21 14:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:52.247909441 +0000 UTC m=+15.198590387" watchObservedRunningTime="2025-11-21 14:29:55.257722111 +0000 UTC m=+18.208403071"
	Nov 21 14:29:55 old-k8s-version-012258 kubelet[1516]: I1121 14:29:55.257917    1516 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-f6t7s" podStartSLOduration=2.158617096 podCreationTimestamp="2025-11-21 14:29:50 +0000 UTC" firstStartedPulling="2025-11-21 14:29:51.458699826 +0000 UTC m=+14.409380763" lastFinishedPulling="2025-11-21 14:29:54.557970689 +0000 UTC m=+17.508651626" observedRunningTime="2025-11-21 14:29:55.257276178 +0000 UTC m=+18.207957124" watchObservedRunningTime="2025-11-21 14:29:55.257887959 +0000 UTC m=+18.208568906"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.191422    1516 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.216103    1516 topology_manager.go:215] "Topology Admit Handler" podUID="3ca4df79-d875-498c-91b8-059d4f975bd0" podNamespace="kube-system" podName="coredns-5dd5756b68-vst4c"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.222388    1516 topology_manager.go:215] "Topology Admit Handler" podUID="4195d236-52f6-4bfd-b47a-9cd7cd89bedd" podNamespace="kube-system" podName="storage-provisioner"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.242068    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp9f\" (UniqueName: \"kubernetes.io/projected/3ca4df79-d875-498c-91b8-059d4f975bd0-kube-api-access-2cp9f\") pod \"coredns-5dd5756b68-vst4c\" (UID: \"3ca4df79-d875-498c-91b8-059d4f975bd0\") " pod="kube-system/coredns-5dd5756b68-vst4c"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.242125    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69bsj\" (UniqueName: \"kubernetes.io/projected/4195d236-52f6-4bfd-b47a-9cd7cd89bedd-kube-api-access-69bsj\") pod \"storage-provisioner\" (UID: \"4195d236-52f6-4bfd-b47a-9cd7cd89bedd\") " pod="kube-system/storage-provisioner"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.242163    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ca4df79-d875-498c-91b8-059d4f975bd0-config-volume\") pod \"coredns-5dd5756b68-vst4c\" (UID: \"3ca4df79-d875-498c-91b8-059d4f975bd0\") " pod="kube-system/coredns-5dd5756b68-vst4c"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.242194    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4195d236-52f6-4bfd-b47a-9cd7cd89bedd-tmp\") pod \"storage-provisioner\" (UID: \"4195d236-52f6-4bfd-b47a-9cd7cd89bedd\") " pod="kube-system/storage-provisioner"
	Nov 21 14:30:06 old-k8s-version-012258 kubelet[1516]: I1121 14:30:06.278995    1516 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.278943202 podCreationTimestamp="2025-11-21 14:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:30:06.278908115 +0000 UTC m=+29.229589065" watchObservedRunningTime="2025-11-21 14:30:06.278943202 +0000 UTC m=+29.229624148"
	Nov 21 14:30:06 old-k8s-version-012258 kubelet[1516]: I1121 14:30:06.289341    1516 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vst4c" podStartSLOduration=16.289291859 podCreationTimestamp="2025-11-21 14:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:30:06.28907602 +0000 UTC m=+29.239756965" watchObservedRunningTime="2025-11-21 14:30:06.289291859 +0000 UTC m=+29.239972805"
	Nov 21 14:30:08 old-k8s-version-012258 kubelet[1516]: I1121 14:30:08.218808    1516 topology_manager.go:215] "Topology Admit Handler" podUID="fa895e52-0bff-4604-8b62-fd0f087015e8" podNamespace="default" podName="busybox"
	Nov 21 14:30:08 old-k8s-version-012258 kubelet[1516]: I1121 14:30:08.263005    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbpfl\" (UniqueName: \"kubernetes.io/projected/fa895e52-0bff-4604-8b62-fd0f087015e8-kube-api-access-cbpfl\") pod \"busybox\" (UID: \"fa895e52-0bff-4604-8b62-fd0f087015e8\") " pod="default/busybox"
	Nov 21 14:30:11 old-k8s-version-012258 kubelet[1516]: I1121 14:30:11.294015    1516 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.035211506 podCreationTimestamp="2025-11-21 14:30:08 +0000 UTC" firstStartedPulling="2025-11-21 14:30:08.642848367 +0000 UTC m=+31.593529296" lastFinishedPulling="2025-11-21 14:30:10.901611757 +0000 UTC m=+33.852292703" observedRunningTime="2025-11-21 14:30:11.293488867 +0000 UTC m=+34.244169813" watchObservedRunningTime="2025-11-21 14:30:11.293974913 +0000 UTC m=+34.244655858"
	
	
	==> storage-provisioner [4cd21f31974314e5db6d58ee50bbd67f0daf675c91355ac568f2d0140f7a8d6c] <==
	I1121 14:30:05.736193       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:30:05.746379       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:30:05.746443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1121 14:30:05.754349       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:30:05.754427       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2ece7dbe-e611-46b3-879d-c0179ba2fde1", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-012258_d783fa48-77b0-4408-a80f-68458be19abb became leader
	I1121 14:30:05.754523       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-012258_d783fa48-77b0-4408-a80f-68458be19abb!
	I1121 14:30:05.855459       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-012258_d783fa48-77b0-4408-a80f-68458be19abb!
	

                                                
                                                
-- /stdout --
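The node summary near the top of the log dump above (the Allocated resources table and the Events list for old-k8s-version-012258) is in the standard kubectl describe node format. A minimal way to re-collect just that section against the same cluster, assuming the kubeconfig context from this run still exists, is:

	kubectl --context old-k8s-version-012258 describe node old-k8s-version-012258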
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-012258 -n old-k8s-version-012258
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-012258 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-012258
helpers_test.go:243: (dbg) docker inspect old-k8s-version-012258:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d",
	        "Created": "2025-11-21T14:29:18.305605728Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251679,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:29:18.348841908Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d/hosts",
	        "LogPath": "/var/lib/docker/containers/b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d/b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d-json.log",
	        "Name": "/old-k8s-version-012258",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-012258:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-012258",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b631b0b0e9d5aafe8f15c853910a13c50352a0ccce9accbcd62a4ea770c78c5d",
	                "LowerDir": "/var/lib/docker/overlay2/4ea3913a068d8b871d800eefdd7cdd11e4851e7b5031ea166038678d2b0108e1-init/diff:/var/lib/docker/overlay2/a649757dd9587fa5a20ca8a56ec1923099f2a5e912dc7e8e1dfa08e79248b59f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4ea3913a068d8b871d800eefdd7cdd11e4851e7b5031ea166038678d2b0108e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4ea3913a068d8b871d800eefdd7cdd11e4851e7b5031ea166038678d2b0108e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4ea3913a068d8b871d800eefdd7cdd11e4851e7b5031ea166038678d2b0108e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-012258",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-012258/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-012258",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-012258",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-012258",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "46765a8ec6da2ef06d0a63c5e792b68206b48e74aeaeb299bf506ff70e7dcffd",
	            "SandboxKey": "/var/run/docker/netns/46765a8ec6da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-012258": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ecee753316979a1bb886a50ec401a80f6274b9bc39c4a8bb1732e91064c178b9",
	                    "EndpointID": "c92e22445c114f178de1b5adf2a20b74000e44859ae25f57affa69d30eb60100",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "9e:cd:46:05:9b:55",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-012258",
	                        "b631b0b0e9d5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
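The docker inspect dump above is easiest to consume a field at a time. Docker's --format flag accepts a Go template, so individual fields, for example the HostConfig.Ulimits list or the host port mapped to 8443/tcp, can be pulled out directly. A minimal sketch against the same container name, assuming the container is still running:

	docker inspect -f '{{json .HostConfig.Ulimits}}' old-k8s-version-012258
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-012258

Against the state captured above, the first command would print [] (no per-container ulimit overrides were requested, so the container falls back to the daemon defaults) and the second would print 33063.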
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-012258 -n old-k8s-version-012258
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-012258 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-012258 logs -n 25: (1.24243583s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-459127 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo containerd config dump                                                                                                                                                                                                        │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ delete  │ -p cert-expiration-371956                                                                                                                                                                                                                           │ cert-expiration-371956       │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ -p cilium-459127 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo crio config                                                                                                                                                                                                                   │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ delete  │ -p cilium-459127                                                                                                                                                                                                                                    │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ start   │ -p cert-options-733993 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p force-systemd-flag-730471 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p NoKubernetes-187733 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ stop    │ -p NoKubernetes-187733                                                                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p NoKubernetes-187733 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p NoKubernetes-187733 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │                     │
	│ delete  │ -p NoKubernetes-187733                                                                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p old-k8s-version-012258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ ssh     │ cert-options-733993 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p cert-options-733993 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ delete  │ -p cert-options-733993                                                                                                                                                                                                                              │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p no-preload-921956 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ ssh     │ force-systemd-flag-730471 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ delete  │ -p force-systemd-flag-730471                                                                                                                                                                                                                        │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p default-k8s-diff-port-376255 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-376255 │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:29:24
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:29:24.877938  255774 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:29:24.878133  255774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:29:24.878179  255774 out.go:374] Setting ErrFile to fd 2...
	I1121 14:29:24.878200  255774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:29:24.879901  255774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:29:24.881344  255774 out.go:368] Setting JSON to false
	I1121 14:29:24.883254  255774 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4307,"bootTime":1763731058,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:29:24.883372  255774 start.go:143] virtualization: kvm guest
	I1121 14:29:24.885483  255774 out.go:179] * [default-k8s-diff-port-376255] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:29:24.887201  255774 notify.go:221] Checking for updates...
	I1121 14:29:24.887242  255774 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:29:24.890729  255774 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:29:24.892963  255774 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:24.894677  255774 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 14:29:24.897870  255774 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:29:24.899765  255774 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:29:24.902854  255774 config.go:182] Loaded profile config "kubernetes-upgrade-797080": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:24.903030  255774 config.go:182] Loaded profile config "no-preload-921956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:24.903162  255774 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:24.903312  255774 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:29:24.939143  255774 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:29:24.939248  255774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:29:25.025144  255774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-21 14:29:25.01035373 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:29:25.025295  255774 docker.go:319] overlay module found
	I1121 14:29:25.027378  255774 out.go:179] * Using the docker driver based on user configuration
	I1121 14:29:22.611340  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-012258
	
	I1121 14:29:22.611365  249617 ubuntu.go:182] provisioning hostname "old-k8s-version-012258"
	I1121 14:29:22.611426  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:22.635589  249617 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:22.635869  249617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1121 14:29:22.635891  249617 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-012258 && echo "old-k8s-version-012258" | sudo tee /etc/hostname
	I1121 14:29:22.796661  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-012258
	
	I1121 14:29:22.796754  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:22.822578  249617 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:22.822834  249617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1121 14:29:22.822860  249617 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-012258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-012258/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-012258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:29:22.970644  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:29:22.970676  249617 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:29:22.970732  249617 ubuntu.go:190] setting up certificates
	I1121 14:29:22.970743  249617 provision.go:84] configureAuth start
	I1121 14:29:22.970826  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:22.991118  249617 provision.go:143] copyHostCerts
	I1121 14:29:22.991183  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:29:22.991193  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:29:22.991250  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:29:22.991367  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:29:22.991381  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:29:22.991414  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:29:22.991488  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:29:22.991499  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:29:22.991526  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:29:22.991627  249617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-012258 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-012258]
	I1121 14:29:23.140756  249617 provision.go:177] copyRemoteCerts
	I1121 14:29:23.140833  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:29:23.140885  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.161751  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.269718  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:29:23.292619  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1121 14:29:23.314336  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:29:23.337086  249617 provision.go:87] duration metric: took 366.309314ms to configureAuth
	I1121 14:29:23.337129  249617 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:29:23.337306  249617 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:23.337320  249617 machine.go:97] duration metric: took 3.89496072s to provisionDockerMachine
	I1121 14:29:23.337326  249617 client.go:176] duration metric: took 11.527957207s to LocalClient.Create
	I1121 14:29:23.337344  249617 start.go:167] duration metric: took 11.528071392s to libmachine.API.Create "old-k8s-version-012258"
	I1121 14:29:23.337352  249617 start.go:293] postStartSetup for "old-k8s-version-012258" (driver="docker")
	I1121 14:29:23.337365  249617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:29:23.337422  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:29:23.337471  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.359217  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.466089  249617 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:29:23.470146  249617 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:29:23.470174  249617 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:29:23.470185  249617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:29:23.470249  249617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:29:23.470349  249617 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:29:23.470480  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:29:23.479086  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:23.506776  249617 start.go:296] duration metric: took 169.402964ms for postStartSetup
	I1121 14:29:23.507166  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:23.527044  249617 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/config.json ...
	I1121 14:29:23.527374  249617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:29:23.527425  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.546669  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.645314  249617 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:29:23.650498  249617 start.go:128] duration metric: took 11.844529266s to createHost
	I1121 14:29:23.650523  249617 start.go:83] releasing machines lock for "old-k8s-version-012258", held for 11.844683904s
	I1121 14:29:23.650592  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:23.671161  249617 ssh_runner.go:195] Run: cat /version.json
	I1121 14:29:23.671227  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.671321  249617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:29:23.671403  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.694189  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.694196  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.856609  249617 ssh_runner.go:195] Run: systemctl --version
	I1121 14:29:23.863273  249617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:29:23.867917  249617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:29:23.867991  249617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:29:23.895679  249617 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:29:23.895707  249617 start.go:496] detecting cgroup driver to use...
	I1121 14:29:23.895742  249617 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:29:23.895805  249617 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:29:23.911897  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:29:23.925350  249617 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:29:23.925400  249617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:29:23.943424  249617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:29:23.962675  249617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:29:24.059689  249617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:29:24.169263  249617 docker.go:234] disabling docker service ...
	I1121 14:29:24.169325  249617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:29:24.191949  249617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:29:24.206181  249617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:29:24.319402  249617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:29:24.455060  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:29:24.472888  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:29:24.497138  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1121 14:29:24.524424  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:29:24.536491  249617 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:29:24.536702  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:29:24.547193  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:24.559919  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:29:24.571627  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:24.581977  249617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:29:24.629839  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:29:24.640310  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:29:24.650595  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:29:24.660801  249617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:29:24.669493  249617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:29:24.677810  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:24.781513  249617 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:29:24.929576  249617 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:29:24.929707  249617 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:29:24.936782  249617 start.go:564] Will wait 60s for crictl version
	I1121 14:29:24.936893  249617 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.942453  249617 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:29:24.986447  249617 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:29:24.986527  249617 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:25.018021  249617 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:25.051308  249617 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
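
The sed edits just above switch containerd's CRI plugin to the systemd cgroup driver, re-point its CNI conf_dir at /etc/cni/net.d, and write /etc/crictl.yaml before restarting the service. A minimal sketch for verifying the result on the node (verification commands assumed, not part of the original run):

	# endpoint that crictl will use, as written a few lines above
	cat /etc/crictl.yaml
	# confirm the cgroup-driver edit took effect
	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# confirm containerd restarted cleanly and answers over CRI
	systemctl is-active containerd
	sudo /usr/local/bin/crictl version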
	I1121 14:29:25.029036  255774 start.go:309] selected driver: docker
	I1121 14:29:25.029056  255774 start.go:930] validating driver "docker" against <nil>
	I1121 14:29:25.029071  255774 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:29:25.029977  255774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:29:25.123370  255774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-21 14:29:25.11156096 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:29:25.123696  255774 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:29:25.124078  255774 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:29:25.125758  255774 out.go:179] * Using Docker driver with root privileges
	I1121 14:29:25.127166  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:25.127249  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:25.127262  255774 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:29:25.127353  255774 start.go:353] cluster config:
	{Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:25.129454  255774 out.go:179] * Starting "default-k8s-diff-port-376255" primary control-plane node in "default-k8s-diff-port-376255" cluster
	I1121 14:29:25.130961  255774 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:29:25.132637  255774 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:29:25.134190  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:25.134237  255774 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1121 14:29:25.134251  255774 cache.go:65] Caching tarball of preloaded images
	I1121 14:29:25.134262  255774 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:29:25.134379  255774 preload.go:238] Found /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1121 14:29:25.134391  255774 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 14:29:25.134520  255774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json ...
	I1121 14:29:25.134560  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json: {Name:mk1db0ba6952ac549a7eae06783e73916a7ad392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.161339  255774 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:29:25.161363  255774 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:29:25.161384  255774 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:29:25.161419  255774 start.go:360] acquireMachinesLock for default-k8s-diff-port-376255: {Name:mka18b3ecaec4bae205bc7951f90400738bef300 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:29:25.161518  255774 start.go:364] duration metric: took 79.824µs to acquireMachinesLock for "default-k8s-diff-port-376255"
	I1121 14:29:25.161561  255774 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disabl
eCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:25.161653  255774 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:29:25.055066  249617 cli_runner.go:164] Run: docker network inspect old-k8s-version-012258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:25.085953  249617 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1121 14:29:25.093859  249617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:25.111432  249617 kubeadm.go:884] updating cluster {Name:old-k8s-version-012258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:29:25.111671  249617 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 14:29:25.111753  249617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:25.143860  249617 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:25.143888  249617 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:29:25.143953  249617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:25.174770  249617 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:25.174789  249617 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:29:25.174797  249617 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 containerd true true} ...
	I1121 14:29:25.174897  249617 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-012258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
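
The kubelet unit rendered above is installed on the node as a systemd drop-in (scp'd below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A minimal sketch for checking what systemd actually loads, assuming shell access to the node:

	# base unit plus all drop-ins, including the ExecStart override shown above
	systemctl cat kubelet
	# the effective command line with --hostname-override and --node-ip
	systemctl show kubelet -p ExecStart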
	I1121 14:29:25.174970  249617 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:25.211311  249617 cni.go:84] Creating CNI manager for ""
	I1121 14:29:25.211341  249617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:25.211371  249617 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:25.211401  249617 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-012258 NodeName:old-k8s-version-012258 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:25.211596  249617 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-012258"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:25.211673  249617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1121 14:29:25.224124  249617 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:29:25.224202  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:25.235430  249617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1121 14:29:25.254181  249617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:25.283842  249617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
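
The kubeadm config generated above lands on the node as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch for sanity-checking it before it is used, assuming shell access to the node and the kubeadm binary path shown earlier in this log:

	# parse the config and print what an init would do, without applying it
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run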
	I1121 14:29:25.302971  249617 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:25.309092  249617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:25.325170  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:25.438037  249617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:25.469767  249617 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258 for IP: 192.168.94.2
	I1121 14:29:25.469790  249617 certs.go:195] generating shared ca certs ...
	I1121 14:29:25.469811  249617 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.470023  249617 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:25.470095  249617 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:25.470105  249617 certs.go:257] generating profile certs ...
	I1121 14:29:25.470177  249617 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key
	I1121 14:29:25.470199  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt with IP's: []
	I1121 14:29:25.634340  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt ...
	I1121 14:29:25.634374  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt: {Name:mk5e1a3132436dad740351857d527e3c45fff4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.648586  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key ...
	I1121 14:29:25.648625  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key: {Name:mk757010d91a13b26eb1340def496546bee9bf26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.648791  249617 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc
	I1121 14:29:25.648816  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1121 14:29:25.817862  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc ...
	I1121 14:29:25.817892  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc: {Name:mk8a482343e99af6e8bdd7e52a6e5b813685beb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.818099  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc ...
	I1121 14:29:25.818121  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc: {Name:mk4cf761e884b2a77e105e39ad6b0495b59b5aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.818237  249617 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt
	I1121 14:29:25.818331  249617 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key
	I1121 14:29:25.818390  249617 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key
	I1121 14:29:25.818406  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt with IP's: []
	I1121 14:29:26.390351  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt ...
	I1121 14:29:26.390391  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt: {Name:mk37207f300780275f6aa5331fc436d60739196c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:26.390599  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key ...
	I1121 14:29:26.390617  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key: {Name:mkff5d416178c38a50235608b783c3957bee8456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:26.390849  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:26.390898  249617 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:26.390913  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:26.390946  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:26.390988  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:26.391029  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:26.391086  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:26.391817  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:26.418450  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:26.446063  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:26.469197  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:26.493823  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1121 14:29:26.526847  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:29:26.555176  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
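
The profile certificates generated above are copied into /var/lib/minikube/certs on the node; the apiserver cert was signed for SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.94.2. A minimal sketch for inspecting the installed cert, mirroring the openssl check used against cert-options-733993 earlier in this run:

	sudo openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'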
	I1121 14:29:25.915600  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:25.916118  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:25.916177  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:25.916228  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:25.948057  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:25.948080  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:25.948087  213058 cri.go:89] found id: ""
	I1121 14:29:25.948096  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:25.948160  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.952634  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.956801  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:25.956870  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:25.990988  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:25.991014  213058 cri.go:89] found id: ""
	I1121 14:29:25.991024  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:25.991083  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.995665  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:25.995736  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:26.031577  213058 cri.go:89] found id: ""
	I1121 14:29:26.031604  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.031612  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:26.031618  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:26.031665  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:26.064880  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:26.064907  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:26.064912  213058 cri.go:89] found id: ""
	I1121 14:29:26.064922  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:26.064979  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.070274  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.075659  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:26.075731  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:26.108079  213058 cri.go:89] found id: ""
	I1121 14:29:26.108108  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.108118  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:26.108125  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:26.108181  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:26.138988  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:26.139018  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:26.139024  213058 cri.go:89] found id: ""
	I1121 14:29:26.139034  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:26.139096  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.143487  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.147564  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:26.147631  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:26.185747  213058 cri.go:89] found id: ""
	I1121 14:29:26.185774  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.185785  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:26.185793  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:26.185848  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:26.220265  213058 cri.go:89] found id: ""
	I1121 14:29:26.220296  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.220308  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:26.220321  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:26.220335  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:26.265042  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:26.265072  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:26.402636  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:26.402672  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:26.484531  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:26.484565  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:26.484581  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:26.534239  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:26.534294  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:26.579971  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:26.580016  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:26.643693  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:26.643727  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:26.683712  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:26.683748  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:26.702800  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:26.702836  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:26.741813  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:26.741845  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:26.812944  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:26.812997  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:26.855307  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:26.855347  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
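
The 213058 lines above are minikube's post-mortem log sweep: each control-plane component is located with "crictl ps -a --quiet --name=<component>", any hits are tailed with "crictl logs --tail 400 <id>", and kubelet/containerd are pulled from journald. The following is a minimal Go sketch of that sweep under the assumption that it runs locally as root on a node with crictl installed (the harness issues the same commands through ssh_runner.go); the single component name is just an example.

    // loggather.go - hedged sketch of the log-gathering loop seen above,
    // run locally instead of through minikube's ssh_runner.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func run(cmd string) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("$ %s\n%s(err=%v)\n", cmd, out, err)
    }

    func main() {
    	// Same units and tail length as the harness uses.
    	run("sudo journalctl -u kubelet -n 400")
    	run("sudo journalctl -u containerd -n 400")
    	// Locate containers for one component, then tail each one found.
    	ids, _ := exec.Command("/bin/bash", "-c",
    		"sudo crictl ps -a --quiet --name=kube-apiserver").Output()
    	for _, id := range strings.Fields(string(ids)) {
    		run("sudo /usr/local/bin/crictl logs --tail 400 " + id)
    	}
    }
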
	I1121 14:29:24.308535  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1121 14:29:24.308619  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.317176  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1121 14:29:24.317245  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.318774  252125 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1121 14:29:24.318825  252125 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.318867  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.328208  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1121 14:29:24.328249  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1121 14:29:24.328291  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.328305  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.328664  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1121 14:29:24.328708  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1121 14:29:24.335839  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1121 14:29:24.335900  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.337631  252125 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1121 14:29:24.337672  252125 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.337713  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.346363  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.346443  252125 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1121 14:29:24.346484  252125 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.346517  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.361284  252125 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1121 14:29:24.361331  252125 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.361375  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.361424  252125 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1121 14:29:24.361445  252125 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.361477  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.366787  252125 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1121 14:29:24.366831  252125 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1121 14:29:24.366871  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.379457  252125 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1121 14:29:24.379503  252125 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.379558  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.379677  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.388569  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.388608  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.388658  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.388681  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.388574  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.418705  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.418763  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.427350  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.434639  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.434777  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.437430  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.437452  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.477986  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.478027  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.478099  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:29:24.478334  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:24.478136  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.485019  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.485026  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.489362  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.521124  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.521651  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:29:24.521767  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:24.553384  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1121 14:29:24.553425  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1121 14:29:24.553522  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:29:24.553632  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:24.553699  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1121 14:29:24.553755  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.553769  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:29:24.553803  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:29:24.553853  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:24.553860  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:24.553893  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:29:24.553920  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1121 14:29:24.553945  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:24.553945  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1121 14:29:24.565027  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1121 14:29:24.565077  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1121 14:29:24.565153  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1121 14:29:24.565169  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1121 14:29:24.574297  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1121 14:29:24.574338  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1121 14:29:24.574363  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1121 14:29:24.574390  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1121 14:29:24.574393  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1121 14:29:24.574407  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1121 14:29:24.784169  252125 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.784246  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.964305  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1121 14:29:25.029557  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:25.029626  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:25.445459  252125 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1121 14:29:25.445578  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:26.691152  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.661495413s)
	I1121 14:29:26.691188  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1121 14:29:26.691209  252125 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:26.691206  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.245604103s)
	I1121 14:29:26.691250  252125 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1121 14:29:26.691264  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:26.691297  252125 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:26.691347  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.696141  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:28.100615  252125 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.404441617s)
	I1121 14:29:28.100696  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:28.100615  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.409327822s)
	I1121 14:29:28.100767  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1121 14:29:28.100803  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:28.100853  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:28.132780  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
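
The 252125 lines show the cached-image load path: "ctr -n=k8s.io images ls name==<ref>" to test whether an image already exists at the expected sha, "crictl rmi" when it does not, a stat existence check on /var/lib/minikube/images/<file>, an scp of the cached tarball when the stat fails, and finally "ctr -n=k8s.io images import". Below is a hedged Go sketch of the check-then-import step for a single image, using a file path taken from the log and assuming local execution (the real flow goes over SSH and includes the scp step, elided here).

    // cacheload.go - hedged sketch of the "stat, copy if missing, ctr import"
    // sequence visible in the 252125 lines above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	img := "/var/lib/minikube/images/coredns_v1.12.1" // target path from the log

    	// Existence check, same command the harness runs on the node.
    	if err := exec.Command("/bin/bash", "-c", `stat -c "%s %y" `+img).Run(); err != nil {
    		fmt.Println("not on the node yet; the harness would scp the cached tarball here")
    	}

    	// Import into containerd's k8s.io namespace, as the Loading image lines do.
    	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", img).CombinedOutput()
    	fmt.Printf("%s(err=%v)\n", out, err)
    }
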
	I1121 14:29:25.163849  255774 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:29:25.164318  255774 start.go:159] libmachine.API.Create for "default-k8s-diff-port-376255" (driver="docker")
	I1121 14:29:25.164395  255774 client.go:173] LocalClient.Create starting
	I1121 14:29:25.164513  255774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem
	I1121 14:29:25.164575  255774 main.go:143] libmachine: Decoding PEM data...
	I1121 14:29:25.164605  255774 main.go:143] libmachine: Parsing certificate...
	I1121 14:29:25.164704  255774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem
	I1121 14:29:25.164760  255774 main.go:143] libmachine: Decoding PEM data...
	I1121 14:29:25.164776  255774 main.go:143] libmachine: Parsing certificate...
	I1121 14:29:25.165330  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:29:25.188513  255774 cli_runner.go:211] docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:29:25.188614  255774 network_create.go:284] running [docker network inspect default-k8s-diff-port-376255] to gather additional debugging logs...
	I1121 14:29:25.188640  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255
	W1121 14:29:25.213297  255774 cli_runner.go:211] docker network inspect default-k8s-diff-port-376255 returned with exit code 1
	I1121 14:29:25.213338  255774 network_create.go:287] error running [docker network inspect default-k8s-diff-port-376255]: docker network inspect default-k8s-diff-port-376255: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-376255 not found
	I1121 14:29:25.213435  255774 network_create.go:289] output of [docker network inspect default-k8s-diff-port-376255]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-376255 not found
	
	** /stderr **
	I1121 14:29:25.213589  255774 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:25.240844  255774 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-66cfc06dc768 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:44:28:22:82:94} reservation:<nil>}
	I1121 14:29:25.241874  255774 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-39921db0d513 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:e4:85:98:a5:e3} reservation:<nil>}
	I1121 14:29:25.242975  255774 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-36a8741c90a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:21:99:72:63:4a} reservation:<nil>}
	I1121 14:29:25.244042  255774 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-63d543fc8bbd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:58:40:d2:33:c4} reservation:<nil>}
	I1121 14:29:25.245269  255774 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb46e0}
	I1121 14:29:25.245303  255774 network_create.go:124] attempt to create docker network default-k8s-diff-port-376255 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1121 14:29:25.245384  255774 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 default-k8s-diff-port-376255
	I1121 14:29:25.322210  255774 network_create.go:108] docker network default-k8s-diff-port-376255 192.168.85.0/24 created
	I1121 14:29:25.322244  255774 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-376255" container
	I1121 14:29:25.322309  255774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:29:25.346732  255774 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-376255 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:29:25.374919  255774 oci.go:103] Successfully created a docker volume default-k8s-diff-port-376255
	I1121 14:29:25.374994  255774 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-376255-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --entrypoint /usr/bin/test -v default-k8s-diff-port-376255:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:29:26.343288  255774 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-376255
	I1121 14:29:26.343370  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:26.343387  255774 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:29:26.343457  255774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-376255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
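
The 255774 lines show the kic network being created: minikube inspects the existing bridge networks, skips each /24 that is already taken (the "skipping subnet ... that is taken" lines), and creates the docker network on the first free one, here 192.168.85.0/24. The sketch below reuses the exact docker network create flags from the log but simplifies the scan into a retry loop (docker rejects an overlapping subnet, so a taken /24 just fails and the next candidate is tried); minikube itself decides up front by looking at the host's bridge interfaces.

    // netcreate.go - hedged sketch of picking a free /24 for the
    // default-k8s-diff-port-376255 network, simplified to retry-on-failure.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	name := "default-k8s-diff-port-376255"
    	// Candidate subnets in the order the log tried them.
    	for _, third := range []int{49, 58, 67, 76, 85} {
    		subnet := fmt.Sprintf("192.168.%d.0/24", third)
    		gateway := fmt.Sprintf("192.168.%d.1", third)
    		cmd := exec.Command("docker", "network", "create",
    			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
    			"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
    			"--label=created_by.minikube.sigs.k8s.io=true",
    			"--label=name.minikube.sigs.k8s.io="+name, name)
    		if out, err := cmd.CombinedOutput(); err == nil {
    			fmt.Printf("created %s on %s: %s", name, subnet, out)
    			return
    		}
    		// A taken subnet fails ("Pool overlaps ..."), so fall through to the next /24.
    	}
    }
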
	I1121 14:29:26.582319  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:29:26.606403  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:26.635408  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:26.661287  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:26.686582  249617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:26.703157  249617 ssh_runner.go:195] Run: openssl version
	I1121 14:29:26.712353  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:26.725593  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.732381  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.732523  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.774823  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:26.785127  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:26.796035  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.800685  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.800751  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.842185  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:26.852632  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:26.863838  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.869571  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.869642  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.922017  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
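
The certificate installation above follows a fixed pattern: copy each .pem into /usr/share/ca-certificates, compute its OpenSSL subject hash (b5213941 for minikubeCA.pem in this run), and symlink /etc/ssl/certs/<hash>.0 to the certificate so the system trust store resolves it. A hedged Go sketch of that hash-and-symlink step, assuming openssl on PATH and root privileges:

    // calink.go - hedged sketch of the "openssl x509 -hash" + symlink step
    // used above for minikubeCA.pem and the test .pem files.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"

    	// Subject hash of the certificate, e.g. "b5213941" in the log above.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))

    	// ln -fs <cert> /etc/ssl/certs/<hash>.0, matching the bash the harness runs.
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // emulate -f
    	if err := os.Symlink(cert, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }
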
	I1121 14:29:26.934065  249617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:26.939457  249617 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:26.939526  249617 kubeadm.go:401] StartCluster: {Name:old-k8s-version-012258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:26.939648  249617 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:26.939710  249617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:26.978114  249617 cri.go:89] found id: ""
	I1121 14:29:26.978192  249617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:26.989363  249617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:27.000529  249617 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:27.000603  249617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:27.012158  249617 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:27.012179  249617 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:27.012231  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:29:27.022084  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:27.022141  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:27.034139  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:29:27.044897  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:27.045038  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:27.056593  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:29:27.066532  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:27.066615  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:27.077925  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:29:27.088254  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:27.088320  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:27.098442  249617 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:27.205509  249617 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:27.290009  249617 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
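
Before kubeadm init, the 249617 lines run a stale-config check: each /etc/kubernetes/*.conf is grepped for the expected control-plane endpoint and removed with rm -f when the endpoint is missing (here every grep exits with status 2 simply because the files do not exist on a first start). A hedged Go sketch of that cleanup, with the endpoint and file list copied from the log:

    // staleconf.go - hedged sketch of the stale kubeconfig cleanup that
    // precedes "kubeadm init" in the 249617 lines above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint (or the file) is absent,
    		// in which case the file is removed so kubeadm can regenerate it.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Println("removing possibly stale", f)
    			exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }
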
	I1121 14:29:29.388121  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:29.388594  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:29.388645  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:29.388690  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:29.416964  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:29.416991  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:29.416996  213058 cri.go:89] found id: ""
	I1121 14:29:29.417006  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:29.417074  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.421476  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.425483  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:29.425557  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:29.453687  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:29.453708  213058 cri.go:89] found id: ""
	I1121 14:29:29.453718  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:29.453783  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.458267  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:29.458353  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:29.485804  213058 cri.go:89] found id: ""
	I1121 14:29:29.485865  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.485876  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:29.485883  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:29.485940  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:29.514265  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:29.514290  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:29.514294  213058 cri.go:89] found id: ""
	I1121 14:29:29.514302  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:29.514349  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.518626  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.522446  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:29.522501  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:29.549770  213058 cri.go:89] found id: ""
	I1121 14:29:29.549799  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.549811  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:29.549819  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:29.549868  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:29.577193  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:29.577217  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:29.577222  213058 cri.go:89] found id: ""
	I1121 14:29:29.577230  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:29.577288  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.581256  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.585291  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:29.585347  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:29.614632  213058 cri.go:89] found id: ""
	I1121 14:29:29.614664  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.614674  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:29.614682  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:29.614740  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:29.645697  213058 cri.go:89] found id: ""
	I1121 14:29:29.645721  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.645730  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:29.645741  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:29.645756  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:29.675578  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:29.675607  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:29.718952  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:29.718990  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:29.750089  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:29.750117  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:29.858708  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:29.858738  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:29.902976  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:29.903013  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:29.938083  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:29.938118  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:29.976329  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:29.976366  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:29.991448  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:29.991485  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:30.053990  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:30.054015  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:30.054032  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:30.089042  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:30.089076  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:30.124498  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:30.124528  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:32.685601  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:32.686035  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:32.686089  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:32.686144  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:32.744948  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:32.745095  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:32.745132  213058 cri.go:89] found id: ""
	I1121 14:29:32.745169  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:32.745355  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.752020  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.760837  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:32.761106  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:32.807418  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:32.807451  213058 cri.go:89] found id: ""
	I1121 14:29:32.807462  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:32.807521  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.813216  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:32.813289  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:32.852598  213058 cri.go:89] found id: ""
	I1121 14:29:32.852633  213058 logs.go:282] 0 containers: []
	W1121 14:29:32.852645  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:32.852653  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:32.852711  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:32.889120  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:32.889144  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:32.889148  213058 cri.go:89] found id: ""
	I1121 14:29:32.889157  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:32.889211  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.894834  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.900572  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:32.900646  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:32.937810  213058 cri.go:89] found id: ""
	I1121 14:29:32.937836  213058 logs.go:282] 0 containers: []
	W1121 14:29:32.937846  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:32.937853  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:32.937914  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:32.975713  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:32.975735  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:32.975741  213058 cri.go:89] found id: ""
	I1121 14:29:32.975751  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:32.975815  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.981574  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.985965  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:32.986030  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:33.019894  213058 cri.go:89] found id: ""
	I1121 14:29:33.019923  213058 logs.go:282] 0 containers: []
	W1121 14:29:33.019935  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:33.019949  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:33.020009  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:33.051872  213058 cri.go:89] found id: ""
	I1121 14:29:33.051901  213058 logs.go:282] 0 containers: []
	W1121 14:29:33.051911  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:33.051923  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:33.051937  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:33.103114  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:33.103153  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:33.142816  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:33.142846  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:33.209677  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:33.209736  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:33.255185  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:33.255220  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:33.272562  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:33.272600  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:33.319098  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:33.319132  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:33.366245  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:33.366286  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:33.410624  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:33.410660  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:33.458217  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:33.458253  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:33.586879  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:33.586919  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
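
The 213058 lines repeat an apiserver health probe: GET https://192.168.76.2:8443/healthz, and while the dial keeps failing with "connection refused" the harness falls back to another round of container and log gathering. A hedged Go sketch of such a healthz poll follows; the address comes from the log, and TLS verification is skipped the way a quick liveness probe would, which is an assumption rather than minikube's exact client setup.

    // healthz.go - hedged sketch of the apiserver healthz polling loop the
    // 213058 lines repeat while waiting for https://192.168.76.2:8443.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.76.2:8443/healthz"
    	for i := 0; i < 5; i++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("stopped:", err) // matches the "connection refused" lines above
    			time.Sleep(3 * time.Second)
    			continue
    		}
    		fmt.Println("healthz:", resp.Status)
    		resp.Body.Close()
    		return
    	}
    }
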
	I1121 14:29:29.835800  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.734910291s)
	I1121 14:29:29.835838  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1121 14:29:29.835860  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:29.835902  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:29.835802  252125 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.702989246s)
	I1121 14:29:29.835965  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1121 14:29:29.836056  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:29.840842  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1121 14:29:29.840873  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1121 14:29:32.866902  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (3.030968163s)
	I1121 14:29:32.866941  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1121 14:29:32.866961  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:32.867002  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:31.901829  255774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-376255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.558304176s)
	I1121 14:29:31.901864  255774 kic.go:203] duration metric: took 5.558473353s to extract preloaded images to volume ...
	W1121 14:29:31.901941  255774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 14:29:31.901969  255774 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 14:29:31.902010  255774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:29:31.985847  255774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-376255 --name default-k8s-diff-port-376255 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --network default-k8s-diff-port-376255 --ip 192.168.85.2 --volume default-k8s-diff-port-376255:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:29:32.403824  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Running}}
	I1121 14:29:32.427802  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.456228  255774 cli_runner.go:164] Run: docker exec default-k8s-diff-port-376255 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:29:32.514766  255774 oci.go:144] the created container "default-k8s-diff-port-376255" has a running status.
	I1121 14:29:32.514799  255774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa...
	I1121 14:29:32.829505  255774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:29:32.861911  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.888316  255774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:29:32.888342  255774 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-376255 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:29:32.948121  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.975355  255774 machine.go:94] provisionDockerMachine start ...
	I1121 14:29:32.975799  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:33.002463  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:33.002813  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:33.002834  255774 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:29:33.003677  255774 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37682->127.0.0.1:33070: read: connection reset by peer
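
The handshake failure above ("connection reset by peer") is transient: sshd inside the freshly started container is still coming up, and the provisioner keeps retrying until the hostname command succeeds a few seconds later (14:29:36). A minimal, hypothetical Go sketch of that wait-for-sshd pattern (address and timeouts are illustrative, taken from the forwarded port in the log, not from minikube's actual code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls a TCP endpoint until something accepts the connection
	// or the deadline expires. It does not perform the SSH handshake itself;
	// it only checks that the forwarded port (e.g. 127.0.0.1:33070) is up.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("ssh port %s not reachable: %w", addr, err)
			}
			time.Sleep(500 * time.Millisecond) // back off briefly before retrying
		}
	}

	func main() {
		if err := waitForSSH("127.0.0.1:33070", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
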
	I1121 14:29:37.228254  249617 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1121 14:29:37.228434  249617 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:37.228644  249617 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:37.228822  249617 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:37.228907  249617 kubeadm.go:319] OS: Linux
	I1121 14:29:37.228971  249617 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:37.229029  249617 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:37.229111  249617 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:37.229198  249617 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:37.229264  249617 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:37.229333  249617 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:37.229403  249617 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:37.229468  249617 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:37.229624  249617 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:37.229762  249617 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:37.229892  249617 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1121 14:29:37.230051  249617 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:37.235113  249617 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:37.235306  249617 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:37.235508  249617 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:37.235691  249617 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:37.235858  249617 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:37.236102  249617 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:37.236205  249617 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:37.236303  249617 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:37.236516  249617 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-012258] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:29:37.236607  249617 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:37.236765  249617 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-012258] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:29:37.236861  249617 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:37.236954  249617 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:37.237021  249617 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:37.237104  249617 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:37.237178  249617 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:37.237257  249617 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:37.237352  249617 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:37.237438  249617 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:37.237554  249617 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:37.237649  249617 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:37.239227  249617 out.go:252]   - Booting up control plane ...
	I1121 14:29:37.239369  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:37.239534  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:37.239682  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:37.239829  249617 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:37.239965  249617 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:37.240022  249617 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:37.240260  249617 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1121 14:29:37.240373  249617 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.503152 seconds
	I1121 14:29:37.240759  249617 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:37.240933  249617 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:37.241035  249617 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:37.241286  249617 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-012258 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:37.241409  249617 kubeadm.go:319] [bootstrap-token] Using token: yix385.n0xejrlt7sdx1ngs
	I1121 14:29:37.243198  249617 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:37.243379  249617 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:37.243497  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:37.243755  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:37.243946  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:37.244147  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:37.244287  249617 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:37.244477  249617 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:37.244564  249617 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:37.244632  249617 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:37.244642  249617 kubeadm.go:319] 
	I1121 14:29:37.244725  249617 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:37.244736  249617 kubeadm.go:319] 
	I1121 14:29:37.244834  249617 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:37.244845  249617 kubeadm.go:319] 
	I1121 14:29:37.244877  249617 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:37.244966  249617 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:37.245033  249617 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:37.245045  249617 kubeadm.go:319] 
	I1121 14:29:37.245111  249617 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:37.245120  249617 kubeadm.go:319] 
	I1121 14:29:37.245178  249617 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:37.245192  249617 kubeadm.go:319] 
	I1121 14:29:37.245274  249617 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:37.245371  249617 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:37.245468  249617 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:37.245476  249617 kubeadm.go:319] 
	I1121 14:29:37.245604  249617 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:37.245734  249617 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:37.245755  249617 kubeadm.go:319] 
	I1121 14:29:37.245866  249617 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yix385.n0xejrlt7sdx1ngs \
	I1121 14:29:37.246024  249617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:37.246062  249617 kubeadm.go:319] 	--control-plane 
	I1121 14:29:37.246072  249617 kubeadm.go:319] 
	I1121 14:29:37.246178  249617 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:37.246189  249617 kubeadm.go:319] 
	I1121 14:29:37.246294  249617 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yix385.n0xejrlt7sdx1ngs \
	I1121 14:29:37.246443  249617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:37.246454  249617 cni.go:84] Creating CNI manager for ""
	I1121 14:29:37.246462  249617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:37.248274  249617 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:36.147516  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376255
	
	I1121 14:29:36.147569  255774 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-376255"
	I1121 14:29:36.147633  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.169609  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:36.169898  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:36.169928  255774 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-376255 && echo "default-k8s-diff-port-376255" | sudo tee /etc/hostname
	I1121 14:29:36.328958  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376255
	
	I1121 14:29:36.329040  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.353105  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:36.353414  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:36.353448  255774 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-376255' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-376255/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-376255' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:29:36.504067  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:29:36.504097  255774 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:29:36.504119  255774 ubuntu.go:190] setting up certificates
	I1121 14:29:36.504133  255774 provision.go:84] configureAuth start
	I1121 14:29:36.504206  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:36.528674  255774 provision.go:143] copyHostCerts
	I1121 14:29:36.528752  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:29:36.528762  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:29:36.528840  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:29:36.528968  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:29:36.528997  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:29:36.529043  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:29:36.529141  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:29:36.529152  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:29:36.529188  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:29:36.529281  255774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-376255 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-376255 localhost minikube]
	I1121 14:29:36.617208  255774 provision.go:177] copyRemoteCerts
	I1121 14:29:36.617283  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:29:36.617345  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.639948  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:36.749486  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:29:36.777360  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1121 14:29:36.804875  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:29:36.830920  255774 provision.go:87] duration metric: took 326.762892ms to configureAuth
	I1121 14:29:36.830953  255774 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:29:36.831165  255774 config.go:182] Loaded profile config "default-k8s-diff-port-376255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:36.831181  255774 machine.go:97] duration metric: took 3.855604158s to provisionDockerMachine
	I1121 14:29:36.831191  255774 client.go:176] duration metric: took 11.666782197s to LocalClient.Create
	I1121 14:29:36.831216  255774 start.go:167] duration metric: took 11.666902979s to libmachine.API.Create "default-k8s-diff-port-376255"
	I1121 14:29:36.831234  255774 start.go:293] postStartSetup for "default-k8s-diff-port-376255" (driver="docker")
	I1121 14:29:36.831254  255774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:29:36.831311  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:29:36.831360  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.855811  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:36.969760  255774 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:29:36.974452  255774 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:29:36.974529  255774 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:29:36.974577  255774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:29:36.974658  255774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:29:36.974771  255774 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:29:36.974903  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:29:36.984975  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:37.017462  255774 start.go:296] duration metric: took 186.210262ms for postStartSetup
	I1121 14:29:37.017947  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:37.041309  255774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json ...
	I1121 14:29:37.041659  255774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:29:37.041731  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.070697  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.177189  255774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:29:37.185711  255774 start.go:128] duration metric: took 12.024042461s to createHost
	I1121 14:29:37.185741  255774 start.go:83] releasing machines lock for "default-k8s-diff-port-376255", held for 12.024206528s
	I1121 14:29:37.185820  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:37.211853  255774 ssh_runner.go:195] Run: cat /version.json
	I1121 14:29:37.211903  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.211965  255774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:29:37.212033  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.238575  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.242252  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.421321  255774 ssh_runner.go:195] Run: systemctl --version
	I1121 14:29:37.431728  255774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:29:37.437939  255774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:29:37.438053  255774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:29:37.469409  255774 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:29:37.469437  255774 start.go:496] detecting cgroup driver to use...
	I1121 14:29:37.469471  255774 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:29:37.469521  255774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:29:37.490669  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:29:37.507754  255774 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:29:37.507821  255774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:29:37.525644  255774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:29:37.545289  255774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:29:37.674060  255774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:29:37.795128  255774 docker.go:234] disabling docker service ...
	I1121 14:29:37.795198  255774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:29:37.819043  255774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:29:37.834819  255774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:29:37.960408  255774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:29:38.072269  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:29:38.089314  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:29:38.105248  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:29:38.117445  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:29:38.128509  255774 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:29:38.128607  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:29:38.139526  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:38.150896  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:29:38.161459  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:38.173179  255774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:29:38.183645  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:29:38.194923  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:29:38.207896  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:29:38.220346  255774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:29:38.230823  255774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:29:38.241807  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:38.339708  255774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:29:38.460319  255774 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:29:38.460387  255774 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:29:38.465812  255774 start.go:564] Will wait 60s for crictl version
	I1121 14:29:38.465875  255774 ssh_runner.go:195] Run: which crictl
	I1121 14:29:38.470166  255774 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:29:38.507773  255774 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:29:38.507860  255774 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:38.532247  255774 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:38.559098  255774 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
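
The block ending here (14:29:38.089 through 38.241) rewrites /etc/containerd/config.toml with a series of sed expressions: pointing crictl at the containerd socket, pinning the pause image, forcing SystemdCgroup = true to match the detected systemd cgroup driver, normalizing the runc runtime to io.containerd.runc.v2, and then restarting containerd. A hedged Go sketch of just the SystemdCgroup edit, using a regexp equivalent to the logged sed expression (the file path and substitution are the ones in the log; the helper itself is illustrative):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setSystemdCgroup flips every `SystemdCgroup = ...` assignment in a
	// containerd config.toml to `SystemdCgroup = true`, preserving indentation,
	// mirroring: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	func setSystemdCgroup(configTOML string) string {
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = true")
	}

	func main() {
		in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false
	`
		fmt.Print(setSystemdCgroup(in))
	}
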
	W1121 14:29:33.655577  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:33.655599  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:33.655612  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.225853  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:36.226247  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:36.226304  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:36.226364  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:36.259583  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:36.259613  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:36.259619  213058 cri.go:89] found id: ""
	I1121 14:29:36.259628  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:36.259690  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.264798  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.269597  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:36.269663  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:36.304312  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:36.304335  213058 cri.go:89] found id: ""
	I1121 14:29:36.304346  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:36.304403  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.309760  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:36.309833  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:36.342617  213058 cri.go:89] found id: ""
	I1121 14:29:36.342643  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.342653  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:36.342660  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:36.342722  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:36.378880  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.378909  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:36.378914  213058 cri.go:89] found id: ""
	I1121 14:29:36.378924  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:36.378996  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.384032  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.388866  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:36.388932  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:36.427253  213058 cri.go:89] found id: ""
	I1121 14:29:36.427282  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.427293  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:36.427300  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:36.427355  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:36.461581  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:36.461604  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:36.461609  213058 cri.go:89] found id: ""
	I1121 14:29:36.461618  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:36.461677  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.466623  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.471422  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:36.471490  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:36.503502  213058 cri.go:89] found id: ""
	I1121 14:29:36.503533  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.503566  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:36.503575  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:36.503633  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:36.538350  213058 cri.go:89] found id: ""
	I1121 14:29:36.538379  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.538390  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:36.538404  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:36.538419  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:36.666987  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:36.667025  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:36.685628  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:36.685659  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:36.763464  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:36.763491  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:36.763508  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:36.808789  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:36.808832  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.887558  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:36.887596  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:36.952391  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:36.952434  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:36.993139  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:36.993167  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:37.037499  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:37.037552  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:37.084237  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:37.084270  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:37.132236  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:37.132272  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:37.172720  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:37.172753  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
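
Throughout this stretch one of the bootstrappers (pid 213058) is in a wait loop: it probes https://192.168.76.2:8443/healthz, gets connection refused, and falls back to gathering kubelet, containerd, and control-plane container logs via crictl before probing again. A minimal, hypothetical Go sketch of such a healthz poll (the endpoint is the one in the log; TLS verification is skipped here purely for illustration, since the probe targets a cluster-internal self-signed certificate):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz hits the apiserver /healthz endpoint until it answers 200 OK
	// or the deadline passes. Connection refused (as in the log) is treated as
	// "not ready yet" and simply retried.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 3 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s never became healthy", url)
	}

	func main() {
		if err := pollHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
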
	I1121 14:29:34.341753  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.474720913s)
	I1121 14:29:34.341781  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1121 14:29:34.341812  252125 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:34.341855  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:37.308520  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.966633628s)
	I1121 14:29:37.308585  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1121 14:29:37.308616  252125 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:37.308666  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:37.772300  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1121 14:29:37.772349  252125 cache_images.go:125] Successfully loaded all cached images
	I1121 14:29:37.772358  252125 cache_images.go:94] duration metric: took 13.627858156s to LoadCachedImages
	I1121 14:29:37.772375  252125 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1121 14:29:37.772522  252125 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-921956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-921956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:37.772622  252125 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:37.802988  252125 cni.go:84] Creating CNI manager for ""
	I1121 14:29:37.803017  252125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:37.803041  252125 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:37.803067  252125 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-921956 NodeName:no-preload-921956 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:37.803212  252125 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-921956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:37.803298  252125 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:37.814189  252125 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1121 14:29:37.814255  252125 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:37.824124  252125 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1121 14:29:37.824214  252125 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1121 14:29:37.824231  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1121 14:29:37.824217  252125 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1121 14:29:37.829417  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1121 14:29:37.829466  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1121 14:29:38.860713  252125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:29:38.875498  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1121 14:29:38.880447  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1121 14:29:38.880477  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1121 14:29:39.014274  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1121 14:29:39.021151  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1121 14:29:39.021187  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1121 14:29:39.234010  252125 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:39.244382  252125 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1121 14:29:39.259897  252125 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:39.279143  252125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
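
In this chunk the kubectl/kubelet/kubeadm binaries are not yet on the node (the `sudo ls /var/lib/minikube/binaries/v1.34.1` probe exits with status 2), so they are fetched from dl.k8s.io with a `?checksum=file:...sha256` query and then copied into /var/lib/minikube/binaries. A hedged Go sketch of that download-and-verify step (the URLs are the ones logged; the helper is illustrative, not minikube's downloader):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetchChecked downloads a binary and its published .sha256 file, then
	// compares the digest of the downloaded bytes with the published value.
	func fetchChecked(binURL, sumURL, dest string) error {
		sumResp, err := http.Get(sumURL)
		if err != nil {
			return err
		}
		defer sumResp.Body.Close()
		sumBytes, err := io.ReadAll(sumResp.Body)
		if err != nil {
			return err
		}
		fields := strings.Fields(string(sumBytes)) // "<hex>" or "<hex>  <name>"
		if len(fields) == 0 {
			return fmt.Errorf("empty checksum file at %s", sumURL)
		}
		want := fields[0]

		binResp, err := http.Get(binURL)
		if err != nil {
			return err
		}
		defer binResp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		h := sha256.New()
		if _, err := io.Copy(io.MultiWriter(out, h), binResp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch for %s: got %s, want %s", binURL, got, want)
		}
		return nil
	}

	func main() {
		err := fetchChecked(
			"https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm",
			"https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256",
			"/tmp/kubeadm",
		)
		fmt.Println(err)
	}
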
	I1121 14:29:38.560688  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:38.580956  255774 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:29:38.585728  255774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:38.599140  255774 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:29:38.599295  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:38.599391  255774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:38.631637  255774 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:38.631660  255774 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:29:38.631720  255774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:38.665498  255774 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:38.665522  255774 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:29:38.665530  255774 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1121 14:29:38.665659  255774 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-376255 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:38.665752  255774 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:38.694106  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:38.694138  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:38.694156  255774 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:38.694182  255774 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-376255 NodeName:default-k8s-diff-port-376255 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:38.694318  255774 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-376255"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:38.694377  255774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:38.704016  255774 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:29:38.704074  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:38.712471  255774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1121 14:29:38.726311  255774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:38.743589  255774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
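	The kubeadm config dumped above is written as one multi-document YAML file (the 2240-byte /var/tmp/minikube/kubeadm.yaml.new copied just above). A small sketch of splitting and inspecting such a file with gopkg.in/yaml.v3 — the path and the fields printed are illustrative, not minikube code:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log; adjust locally
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}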
	I1121 14:29:38.759275  255774 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:38.763723  255774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:38.775814  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:38.870850  255774 ssh_runner.go:195] Run: sudo systemctl start kubelet
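	The two Run lines at 14:29:38.759275 and 14:29:38.763723 implement an idempotent hosts-file update: drop any existing control-plane.minikube.internal entry, append the current one, and copy the result back. A rough Go rendering of the same technique (file path is a placeholder; this is not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<host>" and appends "ip\thost",
// mirroring the grep -v / echo / cp pipeline in the log above.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("hosts.txt", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}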
	I1121 14:29:38.898876  255774 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255 for IP: 192.168.85.2
	I1121 14:29:38.898898  255774 certs.go:195] generating shared ca certs ...
	I1121 14:29:38.898917  255774 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:38.899068  255774 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:38.899116  255774 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:38.899130  255774 certs.go:257] generating profile certs ...
	I1121 14:29:38.899196  255774 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key
	I1121 14:29:38.899223  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt with IP's: []
	I1121 14:29:39.101636  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt ...
	I1121 14:29:39.101669  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt: {Name:mk48f410a390b01d5b10a9357a2648374ae8306b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.101873  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key ...
	I1121 14:29:39.101885  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key: {Name:mkb89c45215e08640f5b5fa9a6de6863ea0983e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.102008  255774 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066
	I1121 14:29:39.102024  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:29:39.438352  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 ...
	I1121 14:29:39.438387  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066: {Name:mkc5f7dc938a9541dec0c2accd850515b39a25d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.438574  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066 ...
	I1121 14:29:39.438586  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066: {Name:mka67f2d91e35acd02a0ed4174188db6877ef796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.438666  255774 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt
	I1121 14:29:39.438744  255774 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key
	I1121 14:29:39.438811  255774 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key
	I1121 14:29:39.438826  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt with IP's: []
	I1121 14:29:39.523793  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt ...
	I1121 14:29:39.523827  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt: {Name:mk2418751bb08ae4f2cae2628ba430b2e731f823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.524011  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key ...
	I1121 14:29:39.524031  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key: {Name:mk12031f310020bd38886fd870544563c6ab1faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
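	The crypto.go lines above generate the profile certificates, including an apiserver serving cert with the SAN IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. A compact, self-signed illustration of the same crypto/x509 technique (minikube signs with its minikubeCA instead; key size and validity here are arbitrary):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN IPs matching the apiserver cert in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.85.2"),
		},
	}
	// Self-signed for brevity; minikube signs with its shared CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}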
	I1121 14:29:39.524255  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:39.524307  255774 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:39.524323  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:39.524353  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:39.524383  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:39.524407  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:39.524445  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:39.525071  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:39.546065  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:39.565880  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:39.585450  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:39.604394  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1121 14:29:39.623736  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:29:39.642460  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:39.661463  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:29:39.681314  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:39.879137  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:39.899730  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:39.918630  255774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:39.935942  255774 ssh_runner.go:195] Run: openssl version
	I1121 14:29:39.943062  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:40.020861  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.026152  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.026209  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.067681  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:40.077051  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:40.087944  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.092369  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.092434  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.132125  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:40.142255  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:40.152828  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.157171  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.157265  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.199881  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
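	The sequence above installs each extra CA into /etc/ssl/certs by copying it to /usr/share/ca-certificates and symlinking it under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A hedged local sketch of computing that hash and creating the link, shelling out to openssl exactly as the log does (paths are placeholders, not minikube code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash mirrors the `openssl x509 -hash -noout -in` and `ln -fs` steps above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	_ = os.Remove(link) // -f semantics: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("minikubeCA.pem", "."); err != nil {
		panic(err)
	}
}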
	I1121 14:29:40.210053  255774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:40.214456  255774 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:40.214524  255774 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:40.214625  255774 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:40.214692  255774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:40.249359  255774 cri.go:89] found id: ""
	I1121 14:29:40.249429  255774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:40.259121  255774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:40.270847  255774 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:40.270910  255774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:40.283266  255774 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:40.283287  255774 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:40.283341  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1121 14:29:40.293676  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:40.293725  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:40.303277  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1121 14:29:40.313015  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:40.313073  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:40.322086  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.330920  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:40.331015  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.339376  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1121 14:29:40.347984  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:40.348046  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
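	The block above is the stale-config cleanup: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, grep for the expected control-plane URL and remove the file when the URL is absent (here the files simply do not exist yet, so every grep exits 2 and every rm is a no-op). A rough Go rendering of that check-and-remove loop; the helper name is made up, the file set and URL come from the log:

package main

import (
	"os"
	"strings"
)

// removeIfMissingEndpoint deletes conf files that do not reference the expected
// control-plane URL, matching the grep / rm -f pairs in the log above.
func removeIfMissingEndpoint(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(f) // rm -f: ignore errors, including "no such file"
		}
	}
}

func main() {
	removeIfMissingEndpoint("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}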
	I1121 14:29:40.356683  255774 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:40.404354  255774 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:29:40.404455  255774 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:40.435448  255774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:40.435583  255774 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:40.435628  255774 kubeadm.go:319] OS: Linux
	I1121 14:29:40.435689  255774 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:40.435827  255774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:40.435905  255774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:40.436039  255774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:40.436108  255774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:40.436176  255774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:40.436276  255774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:40.436351  255774 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:40.508224  255774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:40.508370  255774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:40.508531  255774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:29:40.513996  255774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:39.295828  252125 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:39.301164  252125 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:39.312709  252125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:39.400897  252125 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:39.429294  252125 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956 for IP: 192.168.103.2
	I1121 14:29:39.429315  252125 certs.go:195] generating shared ca certs ...
	I1121 14:29:39.429332  252125 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.429485  252125 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:39.429583  252125 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:39.429600  252125 certs.go:257] generating profile certs ...
	I1121 14:29:39.429678  252125 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key
	I1121 14:29:39.429693  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt with IP's: []
	I1121 14:29:39.556088  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt ...
	I1121 14:29:39.556115  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: {Name:mkc697edce2d4ccb5a4a2ccbe74255aef4a205c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.556297  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key ...
	I1121 14:29:39.556312  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key: {Name:mkad7b167b883af61314c3f8b6c71358edc782dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.556419  252125 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d
	I1121 14:29:39.556435  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1121 14:29:39.871499  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d ...
	I1121 14:29:39.871529  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d: {Name:mkc839b1c936af809ed1159ef4599336fd260d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.871726  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d ...
	I1121 14:29:39.871748  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d: {Name:mkc2f0abcac84f6547f3e0edb165e90b14fdd7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.871882  252125 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt
	I1121 14:29:39.871997  252125 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key
	I1121 14:29:39.872096  252125 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key
	I1121 14:29:39.872120  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt with IP's: []
	I1121 14:29:40.083173  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt ...
	I1121 14:29:40.083201  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt: {Name:mkba7efd029f616230e0b3cf14c4f32abac0549e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:40.083385  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key ...
	I1121 14:29:40.083414  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key: {Name:mk24f6fbb57f5dfce4a401be193e0a832a6ccf6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:40.083661  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:40.083700  252125 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:40.083711  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:40.083749  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:40.083780  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:40.083827  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:40.083887  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:40.084653  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:40.106430  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:40.126520  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:40.148412  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:40.169973  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:29:40.191493  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:29:40.214458  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:40.234692  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:29:40.261986  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:40.352437  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:40.372804  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:40.394700  252125 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:40.411183  252125 ssh_runner.go:195] Run: openssl version
	I1121 14:29:40.419607  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:40.431060  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.436371  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.436429  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.481320  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:40.492797  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:40.502878  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.507432  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.507499  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.567779  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:40.577673  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:40.587826  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.592472  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.592528  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.627626  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:40.637464  252125 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:40.641884  252125 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:40.641943  252125 kubeadm.go:401] StartCluster: {Name:no-preload-921956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-921956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:40.642030  252125 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:40.642085  252125 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:40.673351  252125 cri.go:89] found id: ""
	I1121 14:29:40.673423  252125 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:40.682715  252125 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:40.691493  252125 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:40.691581  252125 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:40.700143  252125 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:40.700160  252125 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:40.700205  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:29:40.708734  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:40.708799  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:40.717135  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:29:40.726191  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:40.726262  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:40.734074  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.742647  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:40.742709  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.751091  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:29:40.759770  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:40.759841  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:40.768253  252125 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:40.810825  252125 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:29:40.810892  252125 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:40.831836  252125 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:40.831940  252125 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:40.832026  252125 kubeadm.go:319] OS: Linux
	I1121 14:29:40.832115  252125 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:40.832212  252125 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:40.832286  252125 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:40.832358  252125 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:40.832432  252125 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:40.832504  252125 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:40.832668  252125 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:40.832735  252125 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:40.895341  252125 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:40.895491  252125 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:40.895637  252125 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:29:40.901358  252125 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:37.249631  249617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:37.262987  249617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1121 14:29:37.263020  249617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:37.283444  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
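	The cni.go lines above apply the CNI manifest for the old-k8s-version profile by copying it to /var/tmp/minikube/cni.yaml and running the version-pinned kubectl against the in-cluster kubeconfig. A hedged stand-alone equivalent using os/exec (binary path, kubeconfig and manifest path copied from the log; this is not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Apply the CNI manifest the same way the log above does.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.0/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}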
	I1121 14:29:38.138719  249617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:38.138808  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:38.138810  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-012258 minikube.k8s.io/updated_at=2025_11_21T14_29_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=old-k8s-version-012258 minikube.k8s.io/primary=true
	I1121 14:29:38.150782  249617 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:38.225220  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:38.726231  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:39.225533  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:39.725591  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.225601  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.725734  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:41.226112  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
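	The repeated `kubectl get sa default` runs above are a poll loop: the command is retried roughly every half second until the default service account exists (a common readiness signal after kubeadm init). A minimal Go sketch of such a poll-until-success loop; the interval, deadline and plain `kubectl` binary are illustrative choices, not minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}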
	I1121 14:29:40.521190  255774 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:40.521325  255774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:40.521431  255774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:41.003970  255774 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:41.240665  255774 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:41.425685  255774 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:41.689428  255774 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:41.923373  255774 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:41.923563  255774 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-376255 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:29:42.051973  255774 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:42.052979  255774 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-376255 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:29:42.277531  255774 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:42.491572  255774 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:42.605458  255774 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:42.605535  255774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:42.870659  255774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:43.039072  255774 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:29:43.228611  255774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:43.489903  255774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:43.563271  255774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:43.563948  255774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:43.568453  255774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:39.727688  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:39.728083  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:39.728134  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:39.728197  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:39.758413  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:39.758436  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:39.758441  213058 cri.go:89] found id: ""
	I1121 14:29:39.758452  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:39.758508  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.763439  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.767912  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:39.767980  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:39.802923  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:39.802948  213058 cri.go:89] found id: ""
	I1121 14:29:39.802957  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:39.803013  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.807778  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:39.807853  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:39.835286  213058 cri.go:89] found id: ""
	I1121 14:29:39.835314  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.835335  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:39.835343  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:39.835408  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:39.864986  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:39.865034  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:39.865040  213058 cri.go:89] found id: ""
	I1121 14:29:39.865050  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:39.865105  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.869441  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.873676  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:39.873739  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:39.902671  213058 cri.go:89] found id: ""
	I1121 14:29:39.902698  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.902707  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:39.902715  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:39.902762  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:39.933452  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:39.933477  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:39.933483  213058 cri.go:89] found id: ""
	I1121 14:29:39.933492  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:39.933557  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.938051  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.942029  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:39.942094  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:39.969991  213058 cri.go:89] found id: ""
	I1121 14:29:39.970018  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.970028  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:39.970036  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:39.970086  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:39.997381  213058 cri.go:89] found id: ""
	I1121 14:29:39.997406  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.997417  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:39.997429  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:39.997443  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:40.027188  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:40.027213  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:40.067878  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:40.067906  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:40.101358  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:40.101388  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:40.115674  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:40.115704  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:40.153845  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:40.153871  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:40.188913  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:40.188944  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:40.244995  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:40.245033  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:40.351506  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:40.351558  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:40.417221  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:40.417244  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:40.417263  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:40.457789  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:40.457836  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:40.520712  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:40.520748  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
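	Each of these gathering passes follows the same two-step pattern visible in the lines above: resolve container IDs with "crictl ps -a --quiet --name=<component>", then dump each one with "crictl logs --tail 400 <id>". The following is only a minimal hand-rolled approximation of that loop for re-running the collection inside the node; the component names and crictl flags are taken from the log, while the "minikube ssh" entry point and the combined loop are illustrative assumptions, not the harness's actual code.
	
	# e.g. after: minikube ssh -p old-k8s-version-012258
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "=== $name ($id) ==="
	    sudo crictl logs --tail 400 "$id"   # same tail depth the harness uses
	  done
	done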
	I1121 14:29:43.056648  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:43.057094  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:43.057150  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:43.057204  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:43.085236  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:43.085260  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:43.085265  213058 cri.go:89] found id: ""
	I1121 14:29:43.085275  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:43.085333  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.089868  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.094074  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:43.094134  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:43.122420  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:43.122447  213058 cri.go:89] found id: ""
	I1121 14:29:43.122457  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:43.122512  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.126830  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:43.126892  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:43.156518  213058 cri.go:89] found id: ""
	I1121 14:29:43.156566  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.156577  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:43.156584  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:43.156646  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:43.185212  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:43.185233  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.185238  213058 cri.go:89] found id: ""
	I1121 14:29:43.185277  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:43.185338  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.190000  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.194074  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:43.194131  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:43.224175  213058 cri.go:89] found id: ""
	I1121 14:29:43.224201  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.224211  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:43.224218  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:43.224277  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:43.258260  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:43.258292  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:43.258299  213058 cri.go:89] found id: ""
	I1121 14:29:43.258310  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:43.258378  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.263276  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.268195  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:43.268264  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:43.303269  213058 cri.go:89] found id: ""
	I1121 14:29:43.303300  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.303311  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:43.303319  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:43.303379  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:43.333956  213058 cri.go:89] found id: ""
	I1121 14:29:43.333985  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.333995  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:43.334007  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:43.334021  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:43.366338  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:43.366369  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:43.458987  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:43.459027  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:43.497960  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:43.497995  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:43.539997  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:43.540035  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:43.575882  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:43.575911  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:40.903405  252125 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:40.903502  252125 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:40.903630  252125 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:41.180390  252125 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:41.211121  252125 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:41.523007  252125 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:42.461521  252125 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:42.641495  252125 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:42.641701  252125 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-921956] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:29:42.773640  252125 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:42.773843  252125 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-921956] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:29:42.921369  252125 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:43.256203  252125 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:43.834470  252125 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:43.834645  252125 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:43.949422  252125 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:44.093777  252125 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:29:44.227287  252125 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:44.509482  252125 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:44.696294  252125 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:44.696767  252125 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:44.705846  252125 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:43.573374  255774 out.go:252]   - Booting up control plane ...
	I1121 14:29:43.573510  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:43.573669  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:43.573781  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:43.590344  255774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:43.590494  255774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:29:43.599838  255774 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:29:43.600184  255774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:43.600247  255774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:43.720721  255774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:29:43.720878  255774 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:29:44.721899  255774 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001196965s
	I1121 14:29:44.724830  255774 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:29:44.724972  255774 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1121 14:29:44.725131  255774 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:29:44.725253  255774 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:29:41.726266  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:42.225460  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:42.725727  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.225740  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.725669  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:44.225350  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:44.725651  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:45.226025  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:45.725289  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:46.226316  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.632243  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:43.632278  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.681909  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:43.681959  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:43.723402  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:43.723454  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:43.776606  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:43.776641  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:43.793171  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:43.793200  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:43.854264  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:43.854293  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:43.854308  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:46.383659  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:46.384075  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:46.384128  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:46.384191  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:46.441629  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:46.441734  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:46.441754  213058 cri.go:89] found id: ""
	I1121 14:29:46.441776  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:46.441873  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.447714  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.453337  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:46.453422  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:46.497451  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:46.497475  213058 cri.go:89] found id: ""
	I1121 14:29:46.497485  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:46.497585  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.504731  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:46.504801  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:46.562972  213058 cri.go:89] found id: ""
	I1121 14:29:46.563014  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.563027  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:46.563036  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:46.563287  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:46.611186  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:46.611216  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:46.611221  213058 cri.go:89] found id: ""
	I1121 14:29:46.611231  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:46.611289  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.620404  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.626388  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:46.626559  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:46.674192  213058 cri.go:89] found id: ""
	I1121 14:29:46.674247  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.674259  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:46.674267  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:46.674448  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:46.749738  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:46.749765  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:46.749771  213058 cri.go:89] found id: ""
	I1121 14:29:46.749780  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:46.749835  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.756273  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.763986  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:46.764120  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:46.811858  213058 cri.go:89] found id: ""
	I1121 14:29:46.811883  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.811901  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:46.811909  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:46.811963  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:46.849599  213058 cri.go:89] found id: ""
	I1121 14:29:46.849645  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.849655  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:46.849666  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:46.849683  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:46.913988  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:46.914024  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:46.953189  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:46.953227  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:47.001663  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:47.001705  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:47.041106  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:47.041137  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:47.107673  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:47.107712  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:47.240432  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:47.240473  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:47.288852  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:47.288894  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1121 14:29:46.531314  255774 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.80645272s
	I1121 14:29:47.509316  255774 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.784421033s
	I1121 14:29:49.226647  255774 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501794549s
	I1121 14:29:49.239409  255774 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:49.252719  255774 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:49.264076  255774 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:49.264371  255774 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-376255 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:49.274799  255774 kubeadm.go:319] [bootstrap-token] Using token: 8nwcfl.9utqukqcvuro6a4p
	I1121 14:29:44.769338  252125 out.go:252]   - Booting up control plane ...
	I1121 14:29:44.769476  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:44.769652  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:44.769771  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:44.769940  252125 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:44.770087  252125 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:29:44.778391  252125 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:29:44.779655  252125 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:44.779729  252125 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:44.894196  252125 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:29:44.894364  252125 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:29:45.895053  252125 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000974959s
	I1121 14:29:45.898754  252125 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:29:45.898875  252125 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1121 14:29:45.899003  252125 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:29:45.899149  252125 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:29:48.621169  252125 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.722350043s
	I1121 14:29:49.059709  252125 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.160801257s
	I1121 14:29:49.276414  255774 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:49.276590  255774 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:49.280532  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:49.287374  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:49.290401  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:49.293308  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:49.297552  255774 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:49.632747  255774 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:46.726037  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:47.228665  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:47.725338  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:48.226199  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:48.725959  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:49.225812  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:49.725337  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.225293  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.310282  249617 kubeadm.go:1114] duration metric: took 12.17154172s to wait for elevateKubeSystemPrivileges
	I1121 14:29:50.310322  249617 kubeadm.go:403] duration metric: took 23.370802852s to StartCluster
	I1121 14:29:50.310347  249617 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:50.310438  249617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:50.311864  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:50.312167  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:50.312169  249617 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:50.312267  249617 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:50.312352  249617 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-012258"
	I1121 14:29:50.312372  249617 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-012258"
	I1121 14:29:50.312403  249617 host.go:66] Checking if "old-k8s-version-012258" exists ...
	I1121 14:29:50.312458  249617 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:50.312516  249617 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-012258"
	I1121 14:29:50.312530  249617 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-012258"
	I1121 14:29:50.312827  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.312965  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.314603  249617 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:50.316238  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:50.339724  249617 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:50.056893  255774 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:50.634602  255774 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:50.635720  255774 kubeadm.go:319] 
	I1121 14:29:50.635840  255774 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:50.635916  255774 kubeadm.go:319] 
	I1121 14:29:50.636085  255774 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:50.636139  255774 kubeadm.go:319] 
	I1121 14:29:50.636189  255774 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:50.636300  255774 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:50.636386  255774 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:50.636448  255774 kubeadm.go:319] 
	I1121 14:29:50.636574  255774 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:50.636584  255774 kubeadm.go:319] 
	I1121 14:29:50.636647  255774 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:50.636652  255774 kubeadm.go:319] 
	I1121 14:29:50.636709  255774 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:50.636796  255774 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:50.636878  255774 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:50.636886  255774 kubeadm.go:319] 
	I1121 14:29:50.636981  255774 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:50.637083  255774 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:50.637090  255774 kubeadm.go:319] 
	I1121 14:29:50.637247  255774 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 8nwcfl.9utqukqcvuro6a4p \
	I1121 14:29:50.637414  255774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:50.637449  255774 kubeadm.go:319] 	--control-plane 
	I1121 14:29:50.637460  255774 kubeadm.go:319] 
	I1121 14:29:50.637571  255774 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:50.637580  255774 kubeadm.go:319] 
	I1121 14:29:50.637672  255774 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 8nwcfl.9utqukqcvuro6a4p \
	I1121 14:29:50.637785  255774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:50.642202  255774 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:50.642513  255774 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:50.642647  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:50.642693  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:50.645524  255774 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:50.339929  249617 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-012258"
	I1121 14:29:50.339977  249617 host.go:66] Checking if "old-k8s-version-012258" exists ...
	I1121 14:29:50.340433  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.341133  249617 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:50.341154  249617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:50.341208  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:50.377822  249617 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:50.377846  249617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:50.377844  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:50.377907  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:50.410483  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:50.415901  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:50.468678  249617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:50.503643  249617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:50.536480  249617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:50.667362  249617 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
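	The long replace pipeline a few lines up edits the CoreDNS Corefile in place: one sed expression inserts a hosts block ahead of the "forward . /etc/resolv.conf" line, and another inserts "log" ahead of "errors". Reconstructed from those sed expressions (not captured output), the relevant part of the rewritten Corefile should look roughly like this:
	
	        log
	        errors
	        ...
	        hosts {
	           192.168.94.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf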
	I1121 14:29:50.668484  249617 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-012258" to be "Ready" ...
	I1121 14:29:50.954598  249617 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:29:50.401999  252125 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502477764s
	I1121 14:29:50.419850  252125 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:50.933016  252125 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:50.948821  252125 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:50.949093  252125 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-921956 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:50.961417  252125 kubeadm.go:319] [bootstrap-token] Using token: uhuim0.7wh8hbt7v76eo7qs
	I1121 14:29:50.955828  249617 addons.go:530] duration metric: took 643.55365ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:51.174831  249617 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-012258" context rescaled to 1 replicas
	I1121 14:29:50.963415  252125 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:50.963588  252125 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:50.971176  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:50.980644  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:50.985255  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:50.989946  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:50.994015  252125 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:51.128309  252125 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:51.550178  252125 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:52.128624  252125 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:52.129402  252125 kubeadm.go:319] 
	I1121 14:29:52.129496  252125 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:52.129528  252125 kubeadm.go:319] 
	I1121 14:29:52.129657  252125 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:52.129669  252125 kubeadm.go:319] 
	I1121 14:29:52.129705  252125 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:52.129798  252125 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:52.129906  252125 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:52.129923  252125 kubeadm.go:319] 
	I1121 14:29:52.129995  252125 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:52.130004  252125 kubeadm.go:319] 
	I1121 14:29:52.130078  252125 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:52.130087  252125 kubeadm.go:319] 
	I1121 14:29:52.130170  252125 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:52.130304  252125 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:52.130418  252125 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:52.130446  252125 kubeadm.go:319] 
	I1121 14:29:52.130574  252125 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:52.130677  252125 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:52.130685  252125 kubeadm.go:319] 
	I1121 14:29:52.130797  252125 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uhuim0.7wh8hbt7v76eo7qs \
	I1121 14:29:52.130966  252125 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:52.131000  252125 kubeadm.go:319] 	--control-plane 
	I1121 14:29:52.131035  252125 kubeadm.go:319] 
	I1121 14:29:52.131212  252125 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:52.131230  252125 kubeadm.go:319] 
	I1121 14:29:52.131343  252125 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uhuim0.7wh8hbt7v76eo7qs \
	I1121 14:29:52.131485  252125 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:52.132830  252125 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:52.132967  252125 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:52.133003  252125 cni.go:84] Creating CNI manager for ""
	I1121 14:29:52.133014  252125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:52.134968  252125 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:52.136241  252125 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:52.141107  252125 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:29:52.141131  252125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:52.155585  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:52.395340  252125 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:52.395422  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.395526  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-921956 minikube.k8s.io/updated_at=2025_11_21T14_29_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=no-preload-921956 minikube.k8s.io/primary=true
	I1121 14:29:52.481012  252125 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:52.481125  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.982198  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.481748  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.981282  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.646815  255774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:50.654615  255774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:29:50.654642  255774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:50.673887  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:50.944978  255774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:50.945143  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.945309  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-376255 minikube.k8s.io/updated_at=2025_11_21T14_29_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=default-k8s-diff-port-376255 minikube.k8s.io/primary=true
	I1121 14:29:50.960009  255774 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:51.036596  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:51.537134  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.037345  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.536941  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.037592  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.536966  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.036678  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.536697  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.037499  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.536808  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.610391  255774 kubeadm.go:1114] duration metric: took 4.665295307s to wait for elevateKubeSystemPrivileges
	I1121 14:29:55.610426  255774 kubeadm.go:403] duration metric: took 15.395907943s to StartCluster
	I1121 14:29:55.610448  255774 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:55.610511  255774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:55.612071  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:55.612346  255774 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:55.612498  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:55.612612  255774 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:55.612696  255774 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-376255"
	I1121 14:29:55.612713  255774 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-376255"
	I1121 14:29:55.612745  255774 host.go:66] Checking if "default-k8s-diff-port-376255" exists ...
	I1121 14:29:55.612775  255774 config.go:182] Loaded profile config "default-k8s-diff-port-376255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:55.612835  255774 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-376255"
	I1121 14:29:55.612852  255774 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-376255"
	I1121 14:29:55.613218  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.613392  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.613476  255774 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:55.615420  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:55.641842  255774 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-376255"
	I1121 14:29:55.641893  255774 host.go:66] Checking if "default-k8s-diff-port-376255" exists ...
	I1121 14:29:55.642317  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.647007  255774 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:55.648771  255774 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:55.648807  255774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:55.648882  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:55.679690  255774 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:55.679713  255774 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:55.679780  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:55.680868  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:55.703091  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:55.713751  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:55.781953  255774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:55.795189  255774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:55.811872  255774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:55.895061  255774 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:55.896386  255774 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-376255" to be "Ready" ...
	I1121 14:29:56.162438  255774 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1121 14:29:52.672645  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:29:55.172665  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:29:54.481750  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.981303  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.481778  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.981846  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:56.481336  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:56.981822  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:57.056720  252125 kubeadm.go:1114] duration metric: took 4.66135199s to wait for elevateKubeSystemPrivileges
	I1121 14:29:57.056760  252125 kubeadm.go:403] duration metric: took 16.414821557s to StartCluster
	I1121 14:29:57.056783  252125 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:57.056866  252125 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:57.059279  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:57.059591  252125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:57.059595  252125 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:57.059668  252125 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:57.059755  252125 addons.go:70] Setting storage-provisioner=true in profile "no-preload-921956"
	I1121 14:29:57.059780  252125 addons.go:239] Setting addon storage-provisioner=true in "no-preload-921956"
	I1121 14:29:57.059783  252125 addons.go:70] Setting default-storageclass=true in profile "no-preload-921956"
	I1121 14:29:57.059810  252125 host.go:66] Checking if "no-preload-921956" exists ...
	I1121 14:29:57.059818  252125 config.go:182] Loaded profile config "no-preload-921956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:57.059810  252125 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-921956"
	I1121 14:29:57.060267  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.060366  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.061615  252125 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:57.063049  252125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:57.087511  252125 addons.go:239] Setting addon default-storageclass=true in "no-preload-921956"
	I1121 14:29:57.087574  252125 host.go:66] Checking if "no-preload-921956" exists ...
	I1121 14:29:57.088046  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.088842  252125 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:57.090553  252125 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:57.090577  252125 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:57.090634  252125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-921956
	I1121 14:29:57.113518  252125 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:57.113567  252125 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:57.113644  252125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-921956
	I1121 14:29:57.116604  252125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/no-preload-921956/id_rsa Username:docker}
	I1121 14:29:57.140626  252125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/no-preload-921956/id_rsa Username:docker}
	I1121 14:29:57.162241  252125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:57.221336  252125 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:57.237060  252125 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:57.259845  252125 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:57.393470  252125 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:57.394577  252125 node_ready.go:35] waiting up to 6m0s for node "no-preload-921956" to be "Ready" ...
	I1121 14:29:57.623024  252125 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:29:57.414885  213058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.125971322s)
	W1121 14:29:57.414929  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1121 14:29:57.414939  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:57.414952  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:57.462838  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:57.462881  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:57.526637  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:57.526671  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:57.574224  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:57.574259  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:57.624430  252125 addons.go:530] duration metric: took 564.759261ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:57.898009  252125 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-921956" context rescaled to 1 replicas
	I1121 14:29:56.163632  255774 addons.go:530] duration metric: took 551.031985ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:56.399602  255774 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-376255" context rescaled to 1 replicas
	W1121 14:29:57.899680  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:29:57.174208  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:29:59.672116  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:30:00.114035  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1121 14:29:59.398191  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:01.898360  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:29:59.900344  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:01.900816  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:04.400331  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:01.672252  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:30:04.171805  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:30:05.672011  249617 node_ready.go:49] node "old-k8s-version-012258" is "Ready"
	I1121 14:30:05.672046  249617 node_ready.go:38] duration metric: took 15.003519412s for node "old-k8s-version-012258" to be "Ready" ...
	I1121 14:30:05.672064  249617 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:05.672125  249617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:05.689799  249617 api_server.go:72] duration metric: took 15.377593574s to wait for apiserver process to appear ...
	I1121 14:30:05.689974  249617 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:05.690001  249617 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:30:05.696217  249617 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1121 14:30:05.697950  249617 api_server.go:141] control plane version: v1.28.0
	I1121 14:30:05.697978  249617 api_server.go:131] duration metric: took 7.994891ms to wait for apiserver health ...
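The healthz gate that just completed is a plain HTTPS GET against the apiserver's /healthz endpoint, which answers with the literal body "ok" once the control plane is serving. A manual equivalent from the host (sketch; -k skips certificate verification, which is acceptable for a quick local smoke check):

    # probe the old-k8s-version apiserver directly
    curl -k https://192.168.94.2:8443/healthz
    # expected response body: ok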
	I1121 14:30:05.697990  249617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:05.702726  249617 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:05.702769  249617 system_pods.go:61] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.702778  249617 system_pods.go:61] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.702785  249617 system_pods.go:61] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.702796  249617 system_pods.go:61] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.702808  249617 system_pods.go:61] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.702818  249617 system_pods.go:61] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.702822  249617 system_pods.go:61] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.702829  249617 system_pods.go:61] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.702837  249617 system_pods.go:74] duration metric: took 4.84094ms to wait for pod list to return data ...
	I1121 14:30:05.702852  249617 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:05.705127  249617 default_sa.go:45] found service account: "default"
	I1121 14:30:05.705151  249617 default_sa.go:55] duration metric: took 2.290103ms for default service account to be created ...
	I1121 14:30:05.705161  249617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:05.710235  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:05.710318  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.710330  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.710337  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.710367  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.710374  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.710380  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.710385  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.710404  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.710597  249617 retry.go:31] will retry after 257.065607ms: missing components: kube-dns
	I1121 14:30:05.972608  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:05.972648  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.972657  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.972665  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.972676  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.972682  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.972687  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.972692  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.972707  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.972726  249617 retry.go:31] will retry after 339.692313ms: missing components: kube-dns
	I1121 14:30:06.317124  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:06.317155  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Running
	I1121 14:30:06.317160  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:06.317163  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:06.317167  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:06.317171  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:06.317175  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:06.317178  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:06.317181  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Running
	I1121 14:30:06.317188  249617 system_pods.go:126] duration metric: took 612.020803ms to wait for k8s-apps to be running ...
	I1121 14:30:06.317194  249617 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:06.317250  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:06.332295  249617 system_svc.go:56] duration metric: took 15.088564ms WaitForService to wait for kubelet
	I1121 14:30:06.332331  249617 kubeadm.go:587] duration metric: took 16.020134285s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:06.332357  249617 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:06.338044  249617 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:06.338071  249617 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:06.338084  249617 node_conditions.go:105] duration metric: took 5.72136ms to run NodePressure ...
	I1121 14:30:06.338096  249617 start.go:242] waiting for startup goroutines ...
	I1121 14:30:06.338102  249617 start.go:247] waiting for cluster config update ...
	I1121 14:30:06.338113  249617 start.go:256] writing updated cluster config ...
	I1121 14:30:06.338382  249617 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:06.342534  249617 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:06.347323  249617 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vst4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.352062  249617 pod_ready.go:94] pod "coredns-5dd5756b68-vst4c" is "Ready"
	I1121 14:30:06.352087  249617 pod_ready.go:86] duration metric: took 4.697932ms for pod "coredns-5dd5756b68-vst4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.354946  249617 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.359326  249617 pod_ready.go:94] pod "etcd-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.359355  249617 pod_ready.go:86] duration metric: took 4.388182ms for pod "etcd-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.362007  249617 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.366060  249617 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.366081  249617 pod_ready.go:86] duration metric: took 4.051984ms for pod "kube-apiserver-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.368789  249617 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.746914  249617 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.746952  249617 pod_ready.go:86] duration metric: took 378.141903ms for pod "kube-controller-manager-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.947790  249617 pod_ready.go:83] waiting for pod "kube-proxy-wsp2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.347266  249617 pod_ready.go:94] pod "kube-proxy-wsp2w" is "Ready"
	I1121 14:30:07.347291  249617 pod_ready.go:86] duration metric: took 399.477159ms for pod "kube-proxy-wsp2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.547233  249617 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.946728  249617 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-012258" is "Ready"
	I1121 14:30:07.946756  249617 pod_ready.go:86] duration metric: took 399.500525ms for pod "kube-scheduler-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.946772  249617 pod_ready.go:40] duration metric: took 1.604187461s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:08.009909  249617 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1121 14:30:08.014607  249617 out.go:203] 
	W1121 14:30:08.016075  249617 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1121 14:30:08.020782  249617 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1121 14:30:08.022622  249617 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-012258" cluster and "default" namespace by default
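The skew warning a few lines up notes that the host kubectl (1.34.2) is six minor versions ahead of this cluster (1.28.0); the suggested fix is to let minikube fetch a matching client. A short post-"Done!" verification pass along the lines of what the log just waited for (sketch; it assumes the kubeconfig context carries the profile name, which is how minikube writes it):

    # use a kubectl that matches the cluster version, per the hint above
    minikube -p old-k8s-version-012258 kubectl -- get pods -A
    # re-check the node condition the harness polled for roughly 15s
    kubectl --context old-k8s-version-012258 wait --for=condition=Ready node/old-k8s-version-012258 --timeout=60s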
	I1121 14:30:05.115052  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1121 14:30:05.115115  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:05.115188  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:05.143819  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:05.143839  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:30:05.143843  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:05.143846  213058 cri.go:89] found id: ""
	I1121 14:30:05.143853  213058 logs.go:282] 3 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:05.143912  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.148585  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.152984  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.156944  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:05.157004  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:05.185404  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:05.185430  213058 cri.go:89] found id: ""
	I1121 14:30:05.185440  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:05.185498  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.190360  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:05.190432  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:05.222964  213058 cri.go:89] found id: ""
	I1121 14:30:05.222989  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.222999  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:05.223006  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:05.223058  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:05.254414  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:05.254436  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:05.254440  213058 cri.go:89] found id: ""
	I1121 14:30:05.254447  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:05.254505  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.258766  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.262456  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:05.262524  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:05.288454  213058 cri.go:89] found id: ""
	I1121 14:30:05.288486  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.288496  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:05.288505  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:05.288598  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:05.317814  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:05.317841  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:05.317847  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:05.317851  213058 cri.go:89] found id: ""
	I1121 14:30:05.317861  213058 logs.go:282] 3 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:05.317930  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.322506  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.326684  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.330828  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:05.330957  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:05.360073  213058 cri.go:89] found id: ""
	I1121 14:30:05.360098  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.360107  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:05.360116  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:05.360171  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:05.388524  213058 cri.go:89] found id: ""
	I1121 14:30:05.388561  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.388573  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:05.388587  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:05.388602  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:05.427247  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:05.427279  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:05.517583  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:05.517615  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:05.556205  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:30:05.556238  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:30:05.601637  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:05.601692  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:05.642125  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:05.642167  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:05.707252  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:05.707295  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:05.747947  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:05.747990  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:05.767646  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:05.767678  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
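The gathering cycle above (process 213058) repeats one two-step crictl pattern per component: resolve container IDs by name with a quiet ps, then tail each container's logs. A standalone sketch of the same steps, runnable inside the node:

    # list all kube-apiserver containers (running or exited), IDs only
    ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    # tail the last 400 lines of each, as the harness does
    for id in $ids; do
        sudo crictl logs --tail 400 "$id"
    done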
	W1121 14:30:04.398534  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:06.897181  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:08.897492  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:06.900285  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	I1121 14:30:07.400113  255774 node_ready.go:49] node "default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:07.400148  255774 node_ready.go:38] duration metric: took 11.503726167s for node "default-k8s-diff-port-376255" to be "Ready" ...
	I1121 14:30:07.400166  255774 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:07.400227  255774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:07.416428  255774 api_server.go:72] duration metric: took 11.804040955s to wait for apiserver process to appear ...
	I1121 14:30:07.416462  255774 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:07.416487  255774 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1121 14:30:07.423355  255774 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1121 14:30:07.424441  255774 api_server.go:141] control plane version: v1.34.1
	I1121 14:30:07.424471  255774 api_server.go:131] duration metric: took 8.001103ms to wait for apiserver health ...
	I1121 14:30:07.424480  255774 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:07.428816  255774 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:07.428856  255774 system_pods.go:61] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.428866  255774 system_pods.go:61] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.428874  255774 system_pods.go:61] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.428880  255774 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.428886  255774 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.428891  255774 system_pods.go:61] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.428899  255774 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.428912  255774 system_pods.go:61] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.428921  255774 system_pods.go:74] duration metric: took 4.433771ms to wait for pod list to return data ...
	I1121 14:30:07.428932  255774 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:07.431771  255774 default_sa.go:45] found service account: "default"
	I1121 14:30:07.431794  255774 default_sa.go:55] duration metric: took 2.856811ms for default service account to be created ...
	I1121 14:30:07.431804  255774 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:07.435787  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.435816  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.435821  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.435826  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.435830  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.435833  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.435836  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.435841  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.435846  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.435871  255774 retry.go:31] will retry after 217.060579ms: missing components: kube-dns
	I1121 14:30:07.656900  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.656930  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.656937  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.656945  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.656950  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.656955  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.656959  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.656964  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.656970  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.656989  255774 retry.go:31] will retry after 330.648304ms: missing components: kube-dns
	I1121 14:30:07.995514  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.995612  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.995626  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.995636  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.995642  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.995653  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.995659  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.995664  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.995683  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.995713  255774 retry.go:31] will retry after 466.383408ms: missing components: kube-dns
	I1121 14:30:08.466385  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:08.466414  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Running
	I1121 14:30:08.466419  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:08.466423  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:08.466427  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:08.466430  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:08.466435  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:08.466438  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:08.466441  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Running
	I1121 14:30:08.466448  255774 system_pods.go:126] duration metric: took 1.034639333s to wait for k8s-apps to be running ...
	I1121 14:30:08.466454  255774 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:08.466495  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:08.480058  255774 system_svc.go:56] duration metric: took 13.59071ms WaitForService to wait for kubelet
	I1121 14:30:08.480087  255774 kubeadm.go:587] duration metric: took 12.867708638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:08.480104  255774 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:08.483054  255774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:08.483077  255774 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:08.483089  255774 node_conditions.go:105] duration metric: took 2.980591ms to run NodePressure ...
	I1121 14:30:08.483101  255774 start.go:242] waiting for startup goroutines ...
	I1121 14:30:08.483107  255774 start.go:247] waiting for cluster config update ...
	I1121 14:30:08.483116  255774 start.go:256] writing updated cluster config ...
	I1121 14:30:08.483378  255774 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:08.487457  255774 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:08.490869  255774 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.495613  255774 pod_ready.go:94] pod "coredns-66bc5c9577-fr27b" is "Ready"
	I1121 14:30:08.495638  255774 pod_ready.go:86] duration metric: took 4.745112ms for pod "coredns-66bc5c9577-fr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.498070  255774 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.502098  255774 pod_ready.go:94] pod "etcd-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.502122  255774 pod_ready.go:86] duration metric: took 4.029361ms for pod "etcd-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.504276  255774 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.508229  255774 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.508250  255774 pod_ready.go:86] duration metric: took 3.957821ms for pod "kube-apiserver-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.510387  255774 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.891344  255774 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.891369  255774 pod_ready.go:86] duration metric: took 380.959206ms for pod "kube-controller-manager-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.091636  255774 pod_ready.go:83] waiting for pod "kube-proxy-hdplf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.492078  255774 pod_ready.go:94] pod "kube-proxy-hdplf" is "Ready"
	I1121 14:30:09.492108  255774 pod_ready.go:86] duration metric: took 400.444722ms for pod "kube-proxy-hdplf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.693278  255774 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:10.092105  255774 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:10.092133  255774 pod_ready.go:86] duration metric: took 398.824976ms for pod "kube-scheduler-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:10.092146  255774 pod_ready.go:40] duration metric: took 1.604655578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:10.138628  255774 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:30:10.140593  255774 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-376255" cluster and "default" namespace by default
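Unlike the other profiles, default-k8s-diff-port-376255 serves the apiserver on 8444 rather than 8443 (the healthz probes above go to https://192.168.85.2:8444), which is the point of the profile. After "Done!", the kubeconfig entry minikube wrote should reflect that port; a quick check (sketch, assuming the cluster entry is named after the profile, as minikube does for the docker driver on Linux):

    # print the server URL recorded for this profile's cluster entry
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-376255")].cluster.server}'
    # expected: https://192.168.85.2:8444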
	I1121 14:30:08.754284  213058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (2.986586875s)
	W1121 14:30:08.754342  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60538->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60538->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1121 14:30:08.754352  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:08.754366  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:08.789119  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:08.789149  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:08.842933  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:08.842974  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:08.880878  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:08.880919  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:08.910920  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:30:08.910953  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.440020  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:11.440496  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:11.440556  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:11.440601  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:11.472645  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:11.472669  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:11.472674  213058 cri.go:89] found id: ""
	I1121 14:30:11.472683  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:11.472748  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.478061  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.482946  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:11.483034  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:11.517693  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:11.517722  213058 cri.go:89] found id: ""
	I1121 14:30:11.517732  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:11.517797  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.523621  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:11.523699  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:11.559155  213058 cri.go:89] found id: ""
	I1121 14:30:11.559194  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.559204  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:11.559212  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:11.559271  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:11.595093  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:11.595127  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:11.595133  213058 cri.go:89] found id: ""
	I1121 14:30:11.595143  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:11.595194  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.600085  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.604973  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:11.605048  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:11.639606  213058 cri.go:89] found id: ""
	I1121 14:30:11.639636  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.639647  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:11.639653  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:11.639713  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:11.684373  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:11.684400  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.684405  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:11.684410  213058 cri.go:89] found id: ""
	I1121 14:30:11.684421  213058 logs.go:282] 3 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:11.684482  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.689732  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.695253  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.701315  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:11.701388  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:11.732802  213058 cri.go:89] found id: ""
	I1121 14:30:11.732831  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.732841  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:11.732848  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:11.732907  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:11.761686  213058 cri.go:89] found id: ""
	I1121 14:30:11.761717  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.761729  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:11.761741  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:11.761756  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:11.816634  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:11.816670  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:11.846024  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:30:11.846055  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.876932  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:11.876964  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:11.912984  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:11.913018  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:11.965381  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:11.965423  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:11.997477  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:11.997509  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:12.011497  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:12.011524  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:12.071024  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:12.071049  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:12.071065  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:12.106865  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:12.106898  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:12.141245  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:12.141276  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:12.176551  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:12.176600  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:12.268742  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:12.268780  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	W1121 14:30:10.897620  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	I1121 14:30:11.398100  252125 node_ready.go:49] node "no-preload-921956" is "Ready"
	I1121 14:30:11.398128  252125 node_ready.go:38] duration metric: took 14.003530083s for node "no-preload-921956" to be "Ready" ...
	I1121 14:30:11.398142  252125 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:11.398195  252125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:11.412043  252125 api_server.go:72] duration metric: took 14.35241025s to wait for apiserver process to appear ...
	I1121 14:30:11.412070  252125 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:11.412087  252125 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1121 14:30:11.417254  252125 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1121 14:30:11.418517  252125 api_server.go:141] control plane version: v1.34.1
	I1121 14:30:11.418570  252125 api_server.go:131] duration metric: took 6.492303ms to wait for apiserver health ...
	I1121 14:30:11.418581  252125 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:11.421927  252125 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:11.422024  252125 system_pods.go:61] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.422034  252125 system_pods.go:61] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.422047  252125 system_pods.go:61] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.422059  252125 system_pods.go:61] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.422069  252125 system_pods.go:61] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.422073  252125 system_pods.go:61] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.422077  252125 system_pods.go:61] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.422082  252125 system_pods.go:61] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.422094  252125 system_pods.go:74] duration metric: took 3.505153ms to wait for pod list to return data ...
	I1121 14:30:11.422109  252125 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:11.424685  252125 default_sa.go:45] found service account: "default"
	I1121 14:30:11.424710  252125 default_sa.go:55] duration metric: took 2.591611ms for default service account to be created ...
	I1121 14:30:11.424722  252125 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:11.427627  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.427680  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.427689  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.427703  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.427713  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.427721  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.427726  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.427731  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.427737  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.427768  252125 retry.go:31] will retry after 234.428318ms: missing components: kube-dns
	I1121 14:30:11.669788  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.669831  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.669840  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.669850  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.669858  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.669865  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.669871  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.669877  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.669893  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.669919  252125 retry.go:31] will retry after 250.085803ms: missing components: kube-dns
	I1121 14:30:11.924517  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.924602  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.924614  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.924627  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.924633  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.924642  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.924647  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.924653  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.924661  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.924682  252125 retry.go:31] will retry after 441.862758ms: missing components: kube-dns
	I1121 14:30:12.371065  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:12.371110  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:12.371122  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:12.371131  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:12.371136  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:12.371142  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:12.371147  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:12.371158  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:12.371170  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:12.371189  252125 retry.go:31] will retry after 502.578888ms: missing components: kube-dns
	I1121 14:30:12.879209  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:12.879243  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Running
	I1121 14:30:12.879249  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:12.879253  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:12.879258  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:12.879268  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:12.879271  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:12.879275  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:12.879278  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Running
	I1121 14:30:12.879289  252125 system_pods.go:126] duration metric: took 1.454561179s to wait for k8s-apps to be running ...
	I1121 14:30:12.879301  252125 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:12.879351  252125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:12.894061  252125 system_svc.go:56] duration metric: took 14.74714ms WaitForService to wait for kubelet
	I1121 14:30:12.894092  252125 kubeadm.go:587] duration metric: took 15.834465857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:12.894115  252125 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:12.897599  252125 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:12.897630  252125 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:12.897641  252125 node_conditions.go:105] duration metric: took 3.520753ms to run NodePressure ...
	I1121 14:30:12.897652  252125 start.go:242] waiting for startup goroutines ...
	I1121 14:30:12.897659  252125 start.go:247] waiting for cluster config update ...
	I1121 14:30:12.897669  252125 start.go:256] writing updated cluster config ...
	I1121 14:30:12.897983  252125 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:12.902897  252125 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:12.906562  252125 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s4rzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.912263  252125 pod_ready.go:94] pod "coredns-66bc5c9577-s4rzb" is "Ready"
	I1121 14:30:12.912286  252125 pod_ready.go:86] duration metric: took 5.702456ms for pod "coredns-66bc5c9577-s4rzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.915190  252125 pod_ready.go:83] waiting for pod "etcd-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.919870  252125 pod_ready.go:94] pod "etcd-no-preload-921956" is "Ready"
	I1121 14:30:12.919896  252125 pod_ready.go:86] duration metric: took 4.68423ms for pod "etcd-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.921926  252125 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.925984  252125 pod_ready.go:94] pod "kube-apiserver-no-preload-921956" is "Ready"
	I1121 14:30:12.926012  252125 pod_ready.go:86] duration metric: took 4.065762ms for pod "kube-apiserver-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.928283  252125 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.307608  252125 pod_ready.go:94] pod "kube-controller-manager-no-preload-921956" is "Ready"
	I1121 14:30:13.307639  252125 pod_ready.go:86] duration metric: took 379.335151ms for pod "kube-controller-manager-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.508229  252125 pod_ready.go:83] waiting for pod "kube-proxy-wmx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.907070  252125 pod_ready.go:94] pod "kube-proxy-wmx7z" is "Ready"
	I1121 14:30:13.907101  252125 pod_ready.go:86] duration metric: took 398.843128ms for pod "kube-proxy-wmx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.108040  252125 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.507264  252125 pod_ready.go:94] pod "kube-scheduler-no-preload-921956" is "Ready"
	I1121 14:30:14.507293  252125 pod_ready.go:86] duration metric: took 399.219492ms for pod "kube-scheduler-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.507307  252125 pod_ready.go:40] duration metric: took 1.604362709s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:14.554506  252125 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:30:14.556366  252125 out.go:179] * Done! kubectl is now configured to use "no-preload-921956" cluster and "default" namespace by default
	I1121 14:30:14.802507  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:14.803048  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:14.803100  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:14.803156  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:14.832438  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:14.832464  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:14.832469  213058 cri.go:89] found id: ""
	I1121 14:30:14.832479  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:14.832560  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.836869  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.840970  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:14.841027  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:14.869276  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:14.869297  213058 cri.go:89] found id: ""
	I1121 14:30:14.869306  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:14.869364  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.873530  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:14.873616  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:14.902293  213058 cri.go:89] found id: ""
	I1121 14:30:14.902325  213058 logs.go:282] 0 containers: []
	W1121 14:30:14.902336  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:14.902343  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:14.902396  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:14.931422  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:14.931444  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:14.931448  213058 cri.go:89] found id: ""
	I1121 14:30:14.931455  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:14.931507  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.936188  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.940673  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:14.940742  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:14.969277  213058 cri.go:89] found id: ""
	I1121 14:30:14.969308  213058 logs.go:282] 0 containers: []
	W1121 14:30:14.969320  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:14.969328  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:14.969386  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:14.999162  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:14.999190  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:14.999195  213058 cri.go:89] found id: ""
	I1121 14:30:14.999209  213058 logs.go:282] 2 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:14.999275  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:15.003627  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:15.008044  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:15.008149  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:15.036025  213058 cri.go:89] found id: ""
	I1121 14:30:15.036050  213058 logs.go:282] 0 containers: []
	W1121 14:30:15.036061  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:15.036069  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:15.036123  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:15.064814  213058 cri.go:89] found id: ""
	I1121 14:30:15.064840  213058 logs.go:282] 0 containers: []
	W1121 14:30:15.064851  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:15.064863  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:15.064877  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:15.105369  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:15.105412  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:15.145479  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:15.145521  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:15.186460  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:15.186498  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:15.233156  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:15.233196  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:15.328776  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:15.328824  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:15.343510  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:15.343556  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:15.375919  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:15.375959  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:15.412267  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:15.412310  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:15.467388  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:15.467422  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:15.495400  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:15.495451  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:15.527880  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:15.527906  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:15.589380  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:18.090626  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:18.091055  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:18.091106  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:18.091154  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:18.119750  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:18.119777  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:18.119781  213058 cri.go:89] found id: ""
	I1121 14:30:18.119788  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:18.119846  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.124441  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.128481  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:18.128574  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:18.155968  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:18.155990  213058 cri.go:89] found id: ""
	I1121 14:30:18.156000  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:18.156056  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.160457  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:18.160529  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:18.191869  213058 cri.go:89] found id: ""
	I1121 14:30:18.191899  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.191909  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:18.191916  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:18.191990  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:18.222614  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:18.222639  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:18.222644  213058 cri.go:89] found id: ""
	I1121 14:30:18.222653  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:18.222710  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.227248  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.231976  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:18.232054  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:18.261651  213058 cri.go:89] found id: ""
	I1121 14:30:18.261686  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.261696  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:18.261703  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:18.261756  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:18.293248  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:18.293277  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:18.293283  213058 cri.go:89] found id: ""
	I1121 14:30:18.293291  213058 logs.go:282] 2 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:18.293360  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.297988  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.302375  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:18.302444  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:18.331900  213058 cri.go:89] found id: ""
	I1121 14:30:18.331976  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.331989  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:18.331997  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:18.332053  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:18.362314  213058 cri.go:89] found id: ""
	I1121 14:30:18.362341  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.362351  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:18.362363  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:18.362378  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:18.401362  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:18.401403  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:18.453554  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:18.453597  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:18.470719  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:18.470750  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:18.535220  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:18.535241  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:18.535255  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:18.572460  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:18.572490  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b902d4d95366e       56cc512116c8f       9 seconds ago       Running             busybox                   0                   650f980a2b9de       busybox                                          default
	4cd21f3197431       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   23e45253f8c7e       storage-provisioner                              kube-system
	5c05a4ce99693       ead0a4a53df89       15 seconds ago      Running             coredns                   0                   4a38fce5ce541       coredns-5dd5756b68-vst4c                         kube-system
	14f62b42937d6       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   2189620d082f5       kindnet-f6t7s                                    kube-system
	7b9fdeac6c297       ea1030da44aa1       29 seconds ago      Running             kube-proxy                0                   7e0d6db9e6b3d       kube-proxy-wsp2w                                 kube-system
	2ff2d15ad456d       f6f496300a2ae       48 seconds ago      Running             kube-scheduler            0                   a2abbb0781499       kube-scheduler-old-k8s-version-012258            kube-system
	bff5755d3bb4c       bb5e0dde9054c       48 seconds ago      Running             kube-apiserver            0                   0f35f911732de       kube-apiserver-old-k8s-version-012258            kube-system
	24c3a525c2057       73deb9a3f7025       48 seconds ago      Running             etcd                      0                   11bd8f3a7d6a7       etcd-old-k8s-version-012258                      kube-system
	9694941d50234       4be79c38a4bab       48 seconds ago      Running             kube-controller-manager   0                   45f5f9128f983       kube-controller-manager-old-k8s-version-012258   kube-system
	
	
	==> containerd <==
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.663617176Z" level=info msg="StartContainer for \"5c05a4ce996931fe774ecca66b33620ebb8a09a835d63b1f0ddd04105345bb76\""
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.663619446Z" level=info msg="Container 4cd21f31974314e5db6d58ee50bbd67f0daf675c91355ac568f2d0140f7a8d6c: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.664751507Z" level=info msg="connecting to shim 5c05a4ce996931fe774ecca66b33620ebb8a09a835d63b1f0ddd04105345bb76" address="unix:///run/containerd/s/0b88234bafabade7aa89e6626d296420e30066b3991abfec21350310268aa8a7" protocol=ttrpc version=3
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.672254961Z" level=info msg="CreateContainer within sandbox \"23e45253f8c7ee6d14427e06305531cf9d976c8c976bd1a48cedecbea7976313\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"4cd21f31974314e5db6d58ee50bbd67f0daf675c91355ac568f2d0140f7a8d6c\""
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.673493529Z" level=info msg="StartContainer for \"4cd21f31974314e5db6d58ee50bbd67f0daf675c91355ac568f2d0140f7a8d6c\""
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.674511601Z" level=info msg="connecting to shim 4cd21f31974314e5db6d58ee50bbd67f0daf675c91355ac568f2d0140f7a8d6c" address="unix:///run/containerd/s/a82bd5a517bceb0823436c092fd804897bb31601e146a9022325dd22f0adc41d" protocol=ttrpc version=3
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.728082486Z" level=info msg="StartContainer for \"4cd21f31974314e5db6d58ee50bbd67f0daf675c91355ac568f2d0140f7a8d6c\" returns successfully"
	Nov 21 14:30:05 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:05.728959460Z" level=info msg="StartContainer for \"5c05a4ce996931fe774ecca66b33620ebb8a09a835d63b1f0ddd04105345bb76\" returns successfully"
	Nov 21 14:30:08 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:08.528101810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:fa895e52-0bff-4604-8b62-fd0f087015e8,Namespace:default,Attempt:0,}"
	Nov 21 14:30:08 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:08.569589613Z" level=info msg="connecting to shim 650f980a2b9de14dfd5f63378bb97f102c6ac2132a9ada4c16a5ef068e7d2a2c" address="unix:///run/containerd/s/5e291cbce6d45d78977b32eb821eca28abc28581b57d5fa47a45bc5da629cfec" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:30:08 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:08.641364674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:fa895e52-0bff-4604-8b62-fd0f087015e8,Namespace:default,Attempt:0,} returns sandbox id \"650f980a2b9de14dfd5f63378bb97f102c6ac2132a9ada4c16a5ef068e7d2a2c\""
	Nov 21 14:30:08 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:08.643152152Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.895297688Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.896188926Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.897638365Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.900612481Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.901224670Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.258026607s"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.901267593Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.903245567Z" level=info msg="CreateContainer within sandbox \"650f980a2b9de14dfd5f63378bb97f102c6ac2132a9ada4c16a5ef068e7d2a2c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.911518272Z" level=info msg="Container b902d4d95366e27e951b3537262d21dd82f809e7ad84dd34083f4c621ca4b23b: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.918169889Z" level=info msg="CreateContainer within sandbox \"650f980a2b9de14dfd5f63378bb97f102c6ac2132a9ada4c16a5ef068e7d2a2c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"b902d4d95366e27e951b3537262d21dd82f809e7ad84dd34083f4c621ca4b23b\""
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.918839732Z" level=info msg="StartContainer for \"b902d4d95366e27e951b3537262d21dd82f809e7ad84dd34083f4c621ca4b23b\""
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.919846340Z" level=info msg="connecting to shim b902d4d95366e27e951b3537262d21dd82f809e7ad84dd34083f4c621ca4b23b" address="unix:///run/containerd/s/5e291cbce6d45d78977b32eb821eca28abc28581b57d5fa47a45bc5da629cfec" protocol=ttrpc version=3
	Nov 21 14:30:10 old-k8s-version-012258 containerd[665]: time="2025-11-21T14:30:10.971722510Z" level=info msg="StartContainer for \"b902d4d95366e27e951b3537262d21dd82f809e7ad84dd34083f4c621ca4b23b\" returns successfully"
	Nov 21 14:30:17 old-k8s-version-012258 containerd[665]: E1121 14:30:17.320736     665 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [5c05a4ce996931fe774ecca66b33620ebb8a09a835d63b1f0ddd04105345bb76] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46429 - 55004 "HINFO IN 8589807954474471726.703758692042272696. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027956792s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-012258
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-012258
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=old-k8s-version-012258
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_29_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:29:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-012258
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:30:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:30:07 +0000   Fri, 21 Nov 2025 14:29:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:30:07 +0000   Fri, 21 Nov 2025 14:29:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:30:07 +0000   Fri, 21 Nov 2025 14:29:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:30:07 +0000   Fri, 21 Nov 2025 14:30:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-012258
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                b90c39b5-fac8-48f3-bfec-9ba818fb6bc5
	  Boot ID:                    f900700b-0668-4d24-87ff-85e15fbda365
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-vst4c                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-old-k8s-version-012258                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         45s
	  kube-system                 kindnet-f6t7s                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-012258             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-012258    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-wsp2w                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-012258             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 50s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 50s)  kubelet          Node old-k8s-version-012258 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 50s)  kubelet          Node old-k8s-version-012258 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x7 over 50s)  kubelet          Node old-k8s-version-012258 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  49s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  43s                kubelet          Node old-k8s-version-012258 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet          Node old-k8s-version-012258 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet          Node old-k8s-version-012258 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node old-k8s-version-012258 event: Registered Node old-k8s-version-012258 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-012258 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001887] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086016] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.440508] i8042: Warning: Keylock active
	[  +0.011202] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.526419] block sda: the capability attribute has been deprecated.
	[  +0.095215] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027093] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.485024] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [24c3a525c2057be14d63a0b83d320542988e06c148db3abcea70288b84ad9d55] <==
	{"level":"info","ts":"2025-11-21T14:29:32.241252Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-21T14:29:32.243038Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-21T14:29:32.243254Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-21T14:29:32.243303Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-21T14:29:32.24334Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-21T14:29:32.24338Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-21T14:29:32.527604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-21T14:29:32.527651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-21T14:29:32.527692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-11-21T14:29:32.527708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-11-21T14:29:32.527717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-21T14:29:32.527728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-21T14:29:32.527737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-21T14:29:32.529559Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-012258 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-21T14:29:32.529578Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:29:32.529669Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:29:32.529972Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-21T14:29:32.529994Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-21T14:29:32.529757Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:29:32.5309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-21T14:29:32.531625Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:29:32.53516Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:29:32.535207Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:29:32.536282Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-11-21T14:29:35.645599Z","caller":"traceutil/trace.go:171","msg":"trace[1619369888] transaction","detail":"{read_only:false; response_revision:181; number_of_response:1; }","duration":"103.859179ms","start":"2025-11-21T14:29:35.541719Z","end":"2025-11-21T14:29:35.645578Z","steps":["trace[1619369888] 'process raft request'  (duration: 101.685301ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:30:20 up  1:12,  0 user,  load average: 4.09, 3.08, 1.94
	Linux old-k8s-version-012258 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [14f62b42937d63a9d982189e10059fb863ccdf5ca3eedc2cdab43a2e258708b6] <==
	I1121 14:29:54.836873       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:29:54.837124       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1121 14:29:54.837288       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:29:54.837307       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:29:54.837325       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:29:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:29:55.132056       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:29:55.132129       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:29:55.132143       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:29:55.132319       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:29:55.432449       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:29:55.432473       1 metrics.go:72] Registering metrics
	I1121 14:29:55.432525       1 controller.go:711] "Syncing nftables rules"
	I1121 14:30:05.138150       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:30:05.138210       1 main.go:301] handling current node
	I1121 14:30:15.134126       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:30:15.134169       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bff5755d3bb4c01170cea10eea2a0bd7eb5e4e85eff679e4fd11f262f20d8b28] <==
	I1121 14:29:34.045351       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1121 14:29:34.047124       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1121 14:29:34.047217       1 shared_informer.go:318] Caches are synced for configmaps
	I1121 14:29:34.051166       1 controller.go:624] quota admission added evaluator for: namespaces
	I1121 14:29:34.059678       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1121 14:29:34.059713       1 aggregator.go:166] initial CRD sync complete...
	I1121 14:29:34.059721       1 autoregister_controller.go:141] Starting autoregister controller
	I1121 14:29:34.059728       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:29:34.059737       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:29:34.239983       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:29:34.956388       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:29:34.961744       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:29:34.961779       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:29:35.529678       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:29:35.676651       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:29:35.776358       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:29:35.783426       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1121 14:29:35.785070       1 controller.go:624] quota admission added evaluator for: endpoints
	I1121 14:29:35.792737       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:29:35.992086       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1121 14:29:37.085397       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1121 14:29:37.099935       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:29:37.111942       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1121 14:29:50.620131       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1121 14:29:50.819999       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9694941d5023471382cb75dbe0e35927477b046c67f0406d94b0c2eab9737245] <==
	I1121 14:29:49.846641       1 shared_informer.go:318] Caches are synced for disruption
	I1121 14:29:49.855897       1 shared_informer.go:318] Caches are synced for stateful set
	I1121 14:29:49.881551       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1121 14:29:49.969509       1 shared_informer.go:318] Caches are synced for attach detach
	I1121 14:29:50.014167       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:29:50.025976       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:29:50.366198       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:29:50.366669       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1121 14:29:50.381693       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:29:50.624660       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1121 14:29:50.704235       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1121 14:29:50.830312       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wsp2w"
	I1121 14:29:50.831838       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-f6t7s"
	I1121 14:29:50.927521       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vst4c"
	I1121 14:29:50.936234       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qv6fz"
	I1121 14:29:50.964100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="339.351723ms"
	I1121 14:29:50.978176       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-qv6fz"
	I1121 14:29:50.986743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.057827ms"
	I1121 14:29:50.996010       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.927032ms"
	I1121 14:29:50.996568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="343.659µs"
	I1121 14:30:05.215933       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.246µs"
	I1121 14:30:05.230917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="161.827µs"
	I1121 14:30:06.296502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.965394ms"
	I1121 14:30:06.296638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.82µs"
	I1121 14:30:09.770369       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [7b9fdeac6c297da9e16ba05abceeee4a77258137fd28986a17f946713c8ad0fe] <==
	I1121 14:29:51.457956       1 server_others.go:69] "Using iptables proxy"
	I1121 14:29:51.467641       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1121 14:29:51.489328       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:29:51.492051       1 server_others.go:152] "Using iptables Proxier"
	I1121 14:29:51.492086       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1121 14:29:51.492094       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1121 14:29:51.492128       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1121 14:29:51.492424       1 server.go:846] "Version info" version="v1.28.0"
	I1121 14:29:51.492443       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:29:51.493149       1 config.go:97] "Starting endpoint slice config controller"
	I1121 14:29:51.493193       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1121 14:29:51.493154       1 config.go:188] "Starting service config controller"
	I1121 14:29:51.493237       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1121 14:29:51.493237       1 config.go:315] "Starting node config controller"
	I1121 14:29:51.493252       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1121 14:29:51.593782       1 shared_informer.go:318] Caches are synced for service config
	I1121 14:29:51.593822       1 shared_informer.go:318] Caches are synced for node config
	I1121 14:29:51.593799       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2ff2d15ad456d7eabe7dc6efd47603a67afa696fd1091b577b9633b6669bd9ec] <==
	W1121 14:29:34.007803       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1121 14:29:34.007838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1121 14:29:34.007899       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1121 14:29:34.007919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1121 14:29:34.904012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1121 14:29:34.904113       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1121 14:29:34.906819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1121 14:29:34.906855       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1121 14:29:34.982047       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1121 14:29:34.982173       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1121 14:29:35.046771       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1121 14:29:35.046802       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 14:29:35.065222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1121 14:29:35.065262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1121 14:29:35.119288       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1121 14:29:35.119329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1121 14:29:35.148021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1121 14:29:35.148079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1121 14:29:35.156816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1121 14:29:35.156866       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1121 14:29:35.323566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1121 14:29:35.323609       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1121 14:29:35.347343       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1121 14:29:35.347400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1121 14:29:38.002740       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 21 14:29:49 old-k8s-version-012258 kubelet[1516]: I1121 14:29:49.923571    1516 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.836162    1516 topology_manager.go:215] "Topology Admit Handler" podUID="bc079c02-40ff-4f10-947b-76f1e9784572" podNamespace="kube-system" podName="kube-proxy-wsp2w"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.839382    1516 topology_manager.go:215] "Topology Admit Handler" podUID="bd28a6b5-0214-42be-8883-1adf1217761c" podNamespace="kube-system" podName="kindnet-f6t7s"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.946858    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc079c02-40ff-4f10-947b-76f1e9784572-xtables-lock\") pod \"kube-proxy-wsp2w\" (UID: \"bc079c02-40ff-4f10-947b-76f1e9784572\") " pod="kube-system/kube-proxy-wsp2w"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.948665    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bd28a6b5-0214-42be-8883-1adf1217761c-cni-cfg\") pod \"kindnet-f6t7s\" (UID: \"bd28a6b5-0214-42be-8883-1adf1217761c\") " pod="kube-system/kindnet-f6t7s"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.949046    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd28a6b5-0214-42be-8883-1adf1217761c-xtables-lock\") pod \"kindnet-f6t7s\" (UID: \"bd28a6b5-0214-42be-8883-1adf1217761c\") " pod="kube-system/kindnet-f6t7s"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.949101    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgrts\" (UniqueName: \"kubernetes.io/projected/bc079c02-40ff-4f10-947b-76f1e9784572-kube-api-access-vgrts\") pod \"kube-proxy-wsp2w\" (UID: \"bc079c02-40ff-4f10-947b-76f1e9784572\") " pod="kube-system/kube-proxy-wsp2w"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.950051    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd28a6b5-0214-42be-8883-1adf1217761c-lib-modules\") pod \"kindnet-f6t7s\" (UID: \"bd28a6b5-0214-42be-8883-1adf1217761c\") " pod="kube-system/kindnet-f6t7s"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.950176    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcpxl\" (UniqueName: \"kubernetes.io/projected/bd28a6b5-0214-42be-8883-1adf1217761c-kube-api-access-jcpxl\") pod \"kindnet-f6t7s\" (UID: \"bd28a6b5-0214-42be-8883-1adf1217761c\") " pod="kube-system/kindnet-f6t7s"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.950220    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bc079c02-40ff-4f10-947b-76f1e9784572-kube-proxy\") pod \"kube-proxy-wsp2w\" (UID: \"bc079c02-40ff-4f10-947b-76f1e9784572\") " pod="kube-system/kube-proxy-wsp2w"
	Nov 21 14:29:50 old-k8s-version-012258 kubelet[1516]: I1121 14:29:50.950255    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc079c02-40ff-4f10-947b-76f1e9784572-lib-modules\") pod \"kube-proxy-wsp2w\" (UID: \"bc079c02-40ff-4f10-947b-76f1e9784572\") " pod="kube-system/kube-proxy-wsp2w"
	Nov 21 14:29:55 old-k8s-version-012258 kubelet[1516]: I1121 14:29:55.257777    1516 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wsp2w" podStartSLOduration=5.257722111 podCreationTimestamp="2025-11-21 14:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:52.247909441 +0000 UTC m=+15.198590387" watchObservedRunningTime="2025-11-21 14:29:55.257722111 +0000 UTC m=+18.208403071"
	Nov 21 14:29:55 old-k8s-version-012258 kubelet[1516]: I1121 14:29:55.257917    1516 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-f6t7s" podStartSLOduration=2.158617096 podCreationTimestamp="2025-11-21 14:29:50 +0000 UTC" firstStartedPulling="2025-11-21 14:29:51.458699826 +0000 UTC m=+14.409380763" lastFinishedPulling="2025-11-21 14:29:54.557970689 +0000 UTC m=+17.508651626" observedRunningTime="2025-11-21 14:29:55.257276178 +0000 UTC m=+18.207957124" watchObservedRunningTime="2025-11-21 14:29:55.257887959 +0000 UTC m=+18.208568906"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.191422    1516 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.216103    1516 topology_manager.go:215] "Topology Admit Handler" podUID="3ca4df79-d875-498c-91b8-059d4f975bd0" podNamespace="kube-system" podName="coredns-5dd5756b68-vst4c"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.222388    1516 topology_manager.go:215] "Topology Admit Handler" podUID="4195d236-52f6-4bfd-b47a-9cd7cd89bedd" podNamespace="kube-system" podName="storage-provisioner"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.242068    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp9f\" (UniqueName: \"kubernetes.io/projected/3ca4df79-d875-498c-91b8-059d4f975bd0-kube-api-access-2cp9f\") pod \"coredns-5dd5756b68-vst4c\" (UID: \"3ca4df79-d875-498c-91b8-059d4f975bd0\") " pod="kube-system/coredns-5dd5756b68-vst4c"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.242125    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69bsj\" (UniqueName: \"kubernetes.io/projected/4195d236-52f6-4bfd-b47a-9cd7cd89bedd-kube-api-access-69bsj\") pod \"storage-provisioner\" (UID: \"4195d236-52f6-4bfd-b47a-9cd7cd89bedd\") " pod="kube-system/storage-provisioner"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.242163    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ca4df79-d875-498c-91b8-059d4f975bd0-config-volume\") pod \"coredns-5dd5756b68-vst4c\" (UID: \"3ca4df79-d875-498c-91b8-059d4f975bd0\") " pod="kube-system/coredns-5dd5756b68-vst4c"
	Nov 21 14:30:05 old-k8s-version-012258 kubelet[1516]: I1121 14:30:05.242194    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4195d236-52f6-4bfd-b47a-9cd7cd89bedd-tmp\") pod \"storage-provisioner\" (UID: \"4195d236-52f6-4bfd-b47a-9cd7cd89bedd\") " pod="kube-system/storage-provisioner"
	Nov 21 14:30:06 old-k8s-version-012258 kubelet[1516]: I1121 14:30:06.278995    1516 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.278943202 podCreationTimestamp="2025-11-21 14:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:30:06.278908115 +0000 UTC m=+29.229589065" watchObservedRunningTime="2025-11-21 14:30:06.278943202 +0000 UTC m=+29.229624148"
	Nov 21 14:30:06 old-k8s-version-012258 kubelet[1516]: I1121 14:30:06.289341    1516 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vst4c" podStartSLOduration=16.289291859 podCreationTimestamp="2025-11-21 14:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:30:06.28907602 +0000 UTC m=+29.239756965" watchObservedRunningTime="2025-11-21 14:30:06.289291859 +0000 UTC m=+29.239972805"
	Nov 21 14:30:08 old-k8s-version-012258 kubelet[1516]: I1121 14:30:08.218808    1516 topology_manager.go:215] "Topology Admit Handler" podUID="fa895e52-0bff-4604-8b62-fd0f087015e8" podNamespace="default" podName="busybox"
	Nov 21 14:30:08 old-k8s-version-012258 kubelet[1516]: I1121 14:30:08.263005    1516 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbpfl\" (UniqueName: \"kubernetes.io/projected/fa895e52-0bff-4604-8b62-fd0f087015e8-kube-api-access-cbpfl\") pod \"busybox\" (UID: \"fa895e52-0bff-4604-8b62-fd0f087015e8\") " pod="default/busybox"
	Nov 21 14:30:11 old-k8s-version-012258 kubelet[1516]: I1121 14:30:11.294015    1516 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.035211506 podCreationTimestamp="2025-11-21 14:30:08 +0000 UTC" firstStartedPulling="2025-11-21 14:30:08.642848367 +0000 UTC m=+31.593529296" lastFinishedPulling="2025-11-21 14:30:10.901611757 +0000 UTC m=+33.852292703" observedRunningTime="2025-11-21 14:30:11.293488867 +0000 UTC m=+34.244169813" watchObservedRunningTime="2025-11-21 14:30:11.293974913 +0000 UTC m=+34.244655858"
	
	
	==> storage-provisioner [4cd21f31974314e5db6d58ee50bbd67f0daf675c91355ac568f2d0140f7a8d6c] <==
	I1121 14:30:05.736193       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:30:05.746379       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:30:05.746443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1121 14:30:05.754349       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:30:05.754427       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2ece7dbe-e611-46b3-879d-c0179ba2fde1", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-012258_d783fa48-77b0-4408-a80f-68458be19abb became leader
	I1121 14:30:05.754523       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-012258_d783fa48-77b0-4408-a80f-68458be19abb!
	I1121 14:30:05.855459       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-012258_d783fa48-77b0-4408-a80f-68458be19abb!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-012258 -n old-k8s-version-012258
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-012258 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (13.97s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-376255 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e6d82a47-2d60-4b9a-8e47-37d867b92b64] Pending
helpers_test.go:352: "busybox" [e6d82a47-2d60-4b9a-8e47-37d867b92b64] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e6d82a47-2d60-4b9a-8e47-37d867b92b64] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004365918s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-376255 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
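The failing assertion checks the open-file limit (RLIMIT_NOFILE) inside the busybox pod: the container reports the stock 1024 instead of the 1048576 the test expects. A minimal sketch for reproducing the check by hand against the same profile follows; it assumes the busybox manifest from the test's testdata directory and uses kubectl wait in place of the framework's own polling:

	kubectl --context default-k8s-diff-port-376255 create -f testdata/busybox.yaml
	kubectl --context default-k8s-diff-port-376255 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context default-k8s-diff-port-376255 exec busybox -- /bin/sh -c "ulimit -n"   # the test expects 1048576

If the last command prints 1024, the failure above reproduces.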
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-376255
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-376255:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6",
	        "Created": "2025-11-21T14:29:32.009081088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 257784,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:29:32.068439596Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6/hostname",
	        "HostsPath": "/var/lib/docker/containers/61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6/hosts",
	        "LogPath": "/var/lib/docker/containers/61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6/61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6-json.log",
	        "Name": "/default-k8s-diff-port-376255",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-376255:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-376255",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6",
	                "LowerDir": "/var/lib/docker/overlay2/d47e2ba9d0651c4ea883e5bf100c225e4b05e3e5505fc143f634d6ecb551fb9e-init/diff:/var/lib/docker/overlay2/a649757dd9587fa5a20ca8a56ec1923099f2a5e912dc7e8e1dfa08e79248b59f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d47e2ba9d0651c4ea883e5bf100c225e4b05e3e5505fc143f634d6ecb551fb9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d47e2ba9d0651c4ea883e5bf100c225e4b05e3e5505fc143f634d6ecb551fb9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d47e2ba9d0651c4ea883e5bf100c225e4b05e3e5505fc143f634d6ecb551fb9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-376255",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-376255/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-376255",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-376255",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-376255",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0a24621d720643b3fcc29e1e4e073681c8649e0d7d5f8233994b273a41233ead",
	            "SandboxKey": "/var/run/docker/netns/0a24621d7206",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-376255": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "25d9d9bd67c8277f14a165b0389b03608121b262dc0482f5f0c6cce668c1cfe5",
	                    "EndpointID": "99e8c973752335e26b21d966b72adfcdadf31879bb82aa32ab6520519ebe814c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "4e:7c:cf:18:0f:23",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-376255",
	                        "61c87ca973c0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
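In the HostConfig section of the inspect output above, "Ulimits" is an empty list, so the node container runs with whatever nofile limit the Docker daemon applies by default rather than an explicit per-container override. A quick sketch for confirming the effective limit from the host (assuming the container name shown above and a local docker CLI; this is illustrative, not part of the test):

	docker inspect --format '{{.HostConfig.Ulimits}}' default-k8s-diff-port-376255
	docker exec default-k8s-diff-port-376255 sh -c 'ulimit -n'

The second command reports the nofile soft limit as seen from a shell inside the node container; the daemon's default-ulimits setting in /etc/docker/daemon.json is one place to look if that value is lower than expected.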
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-376255 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-376255 logs -n 25: (1.237766077s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p cilium-459127 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo containerd config dump                                                                                                                                                                                                        │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ delete  │ -p cert-expiration-371956                                                                                                                                                                                                                           │ cert-expiration-371956       │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ -p cilium-459127 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo crio config                                                                                                                                                                                                                   │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ delete  │ -p cilium-459127                                                                                                                                                                                                                                    │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ start   │ -p cert-options-733993 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p force-systemd-flag-730471 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p NoKubernetes-187733 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ stop    │ -p NoKubernetes-187733                                                                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p NoKubernetes-187733 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p NoKubernetes-187733 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │                     │
	│ delete  │ -p NoKubernetes-187733                                                                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p old-k8s-version-012258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ ssh     │ cert-options-733993 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p cert-options-733993 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ delete  │ -p cert-options-733993                                                                                                                                                                                                                              │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p no-preload-921956 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ ssh     │ force-systemd-flag-730471 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ delete  │ -p force-systemd-flag-730471                                                                                                                                                                                                                        │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p default-k8s-diff-port-376255 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-376255 │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:29:24
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:29:24.877938  255774 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:29:24.878133  255774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:29:24.878179  255774 out.go:374] Setting ErrFile to fd 2...
	I1121 14:29:24.878200  255774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:29:24.879901  255774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:29:24.881344  255774 out.go:368] Setting JSON to false
	I1121 14:29:24.883254  255774 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4307,"bootTime":1763731058,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:29:24.883372  255774 start.go:143] virtualization: kvm guest
	I1121 14:29:24.885483  255774 out.go:179] * [default-k8s-diff-port-376255] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:29:24.887201  255774 notify.go:221] Checking for updates...
	I1121 14:29:24.887242  255774 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:29:24.890729  255774 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:29:24.892963  255774 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:24.894677  255774 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 14:29:24.897870  255774 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:29:24.899765  255774 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:29:24.902854  255774 config.go:182] Loaded profile config "kubernetes-upgrade-797080": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:24.903030  255774 config.go:182] Loaded profile config "no-preload-921956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:24.903162  255774 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:24.903312  255774 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:29:24.939143  255774 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:29:24.939248  255774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:29:25.025144  255774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-21 14:29:25.01035373 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:29:25.025295  255774 docker.go:319] overlay module found
	I1121 14:29:25.027378  255774 out.go:179] * Using the docker driver based on user configuration
	I1121 14:29:22.611340  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-012258
	
	I1121 14:29:22.611365  249617 ubuntu.go:182] provisioning hostname "old-k8s-version-012258"
	I1121 14:29:22.611426  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:22.635589  249617 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:22.635869  249617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1121 14:29:22.635891  249617 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-012258 && echo "old-k8s-version-012258" | sudo tee /etc/hostname
	I1121 14:29:22.796661  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-012258
	
	I1121 14:29:22.796754  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:22.822578  249617 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:22.822834  249617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1121 14:29:22.822860  249617 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-012258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-012258/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-012258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:29:22.970644  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: 
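	Note: hostname provisioning for the node ends with the empty SSH result above. A hypothetical spot-check of that step, using the SSH endpoint from the sshutil lines later in this log (port 33060, user docker, the profile's id_rsa key); the combined command is an assumption, not something the test runs:
	# verify the hostname and the pinned 127.0.1.1 entry inside the node
	ssh -p 33060 -i /home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa \
	  docker@127.0.0.1 'hostname && grep 127.0.1.1 /etc/hosts'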
	I1121 14:29:22.970676  249617 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:29:22.970732  249617 ubuntu.go:190] setting up certificates
	I1121 14:29:22.970743  249617 provision.go:84] configureAuth start
	I1121 14:29:22.970826  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:22.991118  249617 provision.go:143] copyHostCerts
	I1121 14:29:22.991183  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:29:22.991193  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:29:22.991250  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:29:22.991367  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:29:22.991381  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:29:22.991414  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:29:22.991488  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:29:22.991499  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:29:22.991526  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:29:22.991627  249617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-012258 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-012258]
	I1121 14:29:23.140756  249617 provision.go:177] copyRemoteCerts
	I1121 14:29:23.140833  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:29:23.140885  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.161751  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.269718  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:29:23.292619  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1121 14:29:23.314336  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:29:23.337086  249617 provision.go:87] duration metric: took 366.309314ms to configureAuth
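	Note: configureAuth generated a server certificate with the SAN list logged at provision.go:117 above, then copied it to /etc/docker/server.pem. A hedged sketch for confirming those SANs on the local copy (assumes openssl is installed on the Jenkins host; this check is not part of the test):
	# inspect the SANs of the freshly generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expected entries: the san=[...] list from the provision.go:117 line above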
	I1121 14:29:23.337129  249617 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:29:23.337306  249617 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:23.337320  249617 machine.go:97] duration metric: took 3.89496072s to provisionDockerMachine
	I1121 14:29:23.337326  249617 client.go:176] duration metric: took 11.527957207s to LocalClient.Create
	I1121 14:29:23.337344  249617 start.go:167] duration metric: took 11.528071392s to libmachine.API.Create "old-k8s-version-012258"
	I1121 14:29:23.337352  249617 start.go:293] postStartSetup for "old-k8s-version-012258" (driver="docker")
	I1121 14:29:23.337365  249617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:29:23.337422  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:29:23.337471  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.359217  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.466089  249617 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:29:23.470146  249617 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:29:23.470174  249617 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:29:23.470185  249617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:29:23.470249  249617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:29:23.470349  249617 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:29:23.470480  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:29:23.479086  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:23.506776  249617 start.go:296] duration metric: took 169.402964ms for postStartSetup
	I1121 14:29:23.507166  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:23.527044  249617 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/config.json ...
	I1121 14:29:23.527374  249617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:29:23.527425  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.546669  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.645314  249617 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:29:23.650498  249617 start.go:128] duration metric: took 11.844529266s to createHost
	I1121 14:29:23.650523  249617 start.go:83] releasing machines lock for "old-k8s-version-012258", held for 11.844683904s
	I1121 14:29:23.650592  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:23.671161  249617 ssh_runner.go:195] Run: cat /version.json
	I1121 14:29:23.671227  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.671321  249617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:29:23.671403  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.694189  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.694196  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.856609  249617 ssh_runner.go:195] Run: systemctl --version
	I1121 14:29:23.863273  249617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:29:23.867917  249617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:29:23.867991  249617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:29:23.895679  249617 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:29:23.895707  249617 start.go:496] detecting cgroup driver to use...
	I1121 14:29:23.895742  249617 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:29:23.895805  249617 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:29:23.911897  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:29:23.925350  249617 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:29:23.925400  249617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:29:23.943424  249617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:29:23.962675  249617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:29:24.059689  249617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:29:24.169263  249617 docker.go:234] disabling docker service ...
	I1121 14:29:24.169325  249617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:29:24.191949  249617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:29:24.206181  249617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:29:24.319402  249617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:29:24.455060  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:29:24.472888  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:29:24.497138  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1121 14:29:24.524424  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:29:24.536491  249617 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:29:24.536702  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:29:24.547193  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:24.559919  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:29:24.571627  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:24.581977  249617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:29:24.629839  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:29:24.640310  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:29:24.650595  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:29:24.660801  249617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:29:24.669493  249617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:29:24.677810  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:24.781513  249617 ssh_runner.go:195] Run: sudo systemctl restart containerd
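	Note: the containerd reconfiguration for the systemd cgroup driver is spread over several one-liners above. A consolidated sketch of the same edits (commands taken from the log; assumes the stock /etc/containerd/config.toml shipped in the kicbase image):
	# cgroup-driver and sandbox-image edits logged at 14:29:24.49 - 14:29:24.78
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	# apply the new config
	sudo systemctl daemon-reload && sudo systemctl restart containerd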
	I1121 14:29:24.929576  249617 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:29:24.929707  249617 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:29:24.936782  249617 start.go:564] Will wait 60s for crictl version
	I1121 14:29:24.936893  249617 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.942453  249617 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:29:24.986447  249617 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:29:24.986527  249617 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:25.018021  249617 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:25.051308  249617 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1121 14:29:25.029036  255774 start.go:309] selected driver: docker
	I1121 14:29:25.029056  255774 start.go:930] validating driver "docker" against <nil>
	I1121 14:29:25.029071  255774 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:29:25.029977  255774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:29:25.123370  255774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-21 14:29:25.11156096 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:29:25.123696  255774 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:29:25.124078  255774 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:29:25.125758  255774 out.go:179] * Using Docker driver with root privileges
	I1121 14:29:25.127166  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:25.127249  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:25.127262  255774 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:29:25.127353  255774 start.go:353] cluster config:
	{Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:25.129454  255774 out.go:179] * Starting "default-k8s-diff-port-376255" primary control-plane node in "default-k8s-diff-port-376255" cluster
	I1121 14:29:25.130961  255774 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:29:25.132637  255774 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:29:25.134190  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:25.134237  255774 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1121 14:29:25.134251  255774 cache.go:65] Caching tarball of preloaded images
	I1121 14:29:25.134262  255774 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:29:25.134379  255774 preload.go:238] Found /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1121 14:29:25.134391  255774 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 14:29:25.134520  255774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json ...
	I1121 14:29:25.134560  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json: {Name:mk1db0ba6952ac549a7eae06783e73916a7ad392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.161339  255774 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:29:25.161363  255774 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:29:25.161384  255774 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:29:25.161419  255774 start.go:360] acquireMachinesLock for default-k8s-diff-port-376255: {Name:mka18b3ecaec4bae205bc7951f90400738bef300 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:29:25.161518  255774 start.go:364] duration metric: took 79.824µs to acquireMachinesLock for "default-k8s-diff-port-376255"
	I1121 14:29:25.161561  255774 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disabl
eCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:25.161653  255774 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:29:25.055066  249617 cli_runner.go:164] Run: docker network inspect old-k8s-version-012258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:25.085953  249617 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1121 14:29:25.093859  249617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
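	Note: the host.minikube.internal entry is injected with the grep-and-rewrite one-liner above, which stays idempotent across reruns. The same pattern, written out for readability (content identical to the command logged at 14:29:25.093):
	# drop any previous host.minikube.internal line, then append the current one
	HOSTLINE=$'192.168.94.1\thost.minikube.internal'
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$HOSTLINE"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts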
	I1121 14:29:25.111432  249617 kubeadm.go:884] updating cluster {Name:old-k8s-version-012258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:29:25.111671  249617 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 14:29:25.111753  249617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:25.143860  249617 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:25.143888  249617 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:29:25.143953  249617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:25.174770  249617 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:25.174789  249617 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:29:25.174797  249617 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 containerd true true} ...
	I1121 14:29:25.174897  249617 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-012258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:25.174970  249617 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:25.211311  249617 cni.go:84] Creating CNI manager for ""
	I1121 14:29:25.211341  249617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:25.211371  249617 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:25.211401  249617 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-012258 NodeName:old-k8s-version-012258 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:25.211596  249617 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-012258"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:25.211673  249617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1121 14:29:25.224124  249617 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:29:25.224202  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:25.235430  249617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1121 14:29:25.254181  249617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:25.283842  249617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
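	Note: the generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new on the node. A hypothetical pre-flight check of that file, using the kubeadm binary path from the log; the --dry-run invocation is an assumption for manual debugging, not a step the test performs:
	# render the config without touching the cluster
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run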
	I1121 14:29:25.302971  249617 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:25.309092  249617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:25.325170  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:25.438037  249617 ssh_runner.go:195] Run: sudo systemctl start kubelet
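	Note: at this point the 10-kubeadm.conf drop-in and kubelet.service unit have been copied and the unit started. A hedged spot-check that the drop-in took effect (systemctl/journalctl are already used elsewhere in this log; this particular check is an assumption):
	# confirm the rendered unit includes the hostname-override/node-ip flags from the drop-in
	systemctl cat kubelet | grep -- --hostname-override
	sudo journalctl -u kubelet -n 50 --no-pager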
	I1121 14:29:25.469767  249617 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258 for IP: 192.168.94.2
	I1121 14:29:25.469790  249617 certs.go:195] generating shared ca certs ...
	I1121 14:29:25.469811  249617 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.470023  249617 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:25.470095  249617 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:25.470105  249617 certs.go:257] generating profile certs ...
	I1121 14:29:25.470177  249617 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key
	I1121 14:29:25.470199  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt with IP's: []
	I1121 14:29:25.634340  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt ...
	I1121 14:29:25.634374  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt: {Name:mk5e1a3132436dad740351857d527e3c45fff4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.648586  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key ...
	I1121 14:29:25.648625  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key: {Name:mk757010d91a13b26eb1340def496546bee9bf26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.648791  249617 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc
	I1121 14:29:25.648816  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1121 14:29:25.817862  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc ...
	I1121 14:29:25.817892  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc: {Name:mk8a482343e99af6e8bdd7e52a6e5b813685beb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.818099  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc ...
	I1121 14:29:25.818121  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc: {Name:mk4cf761e884b2a77e105e39ad6b0495b59b5aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.818237  249617 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt
	I1121 14:29:25.818331  249617 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key
	I1121 14:29:25.818390  249617 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key
	I1121 14:29:25.818406  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt with IP's: []
	I1121 14:29:26.390351  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt ...
	I1121 14:29:26.390391  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt: {Name:mk37207f300780275f6aa5331fc436d60739196c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:26.390599  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key ...
	I1121 14:29:26.390617  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key: {Name:mkff5d416178c38a50235608b783c3957bee8456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:26.390849  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:26.390898  249617 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:26.390913  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:26.390946  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:26.390988  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:26.391029  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:26.391086  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:26.391817  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:26.418450  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:26.446063  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:26.469197  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:26.493823  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1121 14:29:26.526847  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:29:26.555176  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:25.915600  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:25.916118  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:25.916177  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:25.916228  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:25.948057  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:25.948080  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:25.948087  213058 cri.go:89] found id: ""
	I1121 14:29:25.948096  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:25.948160  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.952634  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.956801  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:25.956870  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:25.990988  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:25.991014  213058 cri.go:89] found id: ""
	I1121 14:29:25.991024  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:25.991083  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.995665  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:25.995736  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:26.031577  213058 cri.go:89] found id: ""
	I1121 14:29:26.031604  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.031612  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:26.031618  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:26.031665  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:26.064880  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:26.064907  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:26.064912  213058 cri.go:89] found id: ""
	I1121 14:29:26.064922  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:26.064979  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.070274  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.075659  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:26.075731  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:26.108079  213058 cri.go:89] found id: ""
	I1121 14:29:26.108108  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.108118  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:26.108125  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:26.108181  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:26.138988  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:26.139018  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:26.139024  213058 cri.go:89] found id: ""
	I1121 14:29:26.139034  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:26.139096  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.143487  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.147564  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:26.147631  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:26.185747  213058 cri.go:89] found id: ""
	I1121 14:29:26.185774  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.185785  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:26.185793  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:26.185848  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:26.220265  213058 cri.go:89] found id: ""
	I1121 14:29:26.220296  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.220308  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:26.220321  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:26.220335  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:26.265042  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:26.265072  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:26.402636  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:26.402672  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:26.484531  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:26.484565  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:26.484581  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:26.534239  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:26.534294  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:26.579971  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:26.580016  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:26.643693  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:26.643727  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:26.683712  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:26.683748  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:26.702800  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:26.702836  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:26.741813  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:26.741845  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:26.812944  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:26.812997  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:26.855307  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:26.855347  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
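	Note: the post-mortem loop above (process 213058) collects the same evidence on every pass: an apiserver healthz probe, crictl container listings, per-container logs, and journald tails. A consolidated sketch of those diagnostics for manual use on the node; the crictl and journalctl commands are verbatim from the log, while the curl probe is a substitution for the Go HTTP client the test uses:
	curl -sk https://192.168.76.2:8443/healthz                    # apiserver probe (connection refused above)
	sudo crictl ps -a --quiet --name=kube-apiserver               # find apiserver containers
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>     # per-container logs; <container-id> is a placeholder
	sudo journalctl -u kubelet -n 400                              # kubelet log tail
	sudo journalctl -u containerd -n 400                           # containerd log tail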
	I1121 14:29:24.308535  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1121 14:29:24.308619  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.317176  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1121 14:29:24.317245  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.318774  252125 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1121 14:29:24.318825  252125 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.318867  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.328208  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1121 14:29:24.328249  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1121 14:29:24.328291  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.328305  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.328664  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1121 14:29:24.328708  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1121 14:29:24.335839  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1121 14:29:24.335900  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.337631  252125 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1121 14:29:24.337672  252125 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.337713  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.346363  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.346443  252125 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1121 14:29:24.346484  252125 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.346517  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.361284  252125 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1121 14:29:24.361331  252125 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.361375  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.361424  252125 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1121 14:29:24.361445  252125 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.361477  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.366787  252125 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1121 14:29:24.366831  252125 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1121 14:29:24.366871  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.379457  252125 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1121 14:29:24.379503  252125 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.379558  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.379677  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.388569  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.388608  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.388658  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.388681  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.388574  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.418705  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.418763  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.427350  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.434639  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.434777  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.437430  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.437452  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.477986  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.478027  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.478099  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:29:24.478334  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:24.478136  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.485019  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.485026  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.489362  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.521124  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.521651  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:29:24.521767  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:24.553384  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1121 14:29:24.553425  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1121 14:29:24.553522  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:29:24.553632  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:24.553699  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1121 14:29:24.553755  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.553769  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:29:24.553803  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:29:24.553853  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:24.553860  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:24.553893  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:29:24.553920  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1121 14:29:24.553945  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:24.553945  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1121 14:29:24.565027  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1121 14:29:24.565077  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1121 14:29:24.565153  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1121 14:29:24.565169  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1121 14:29:24.574297  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1121 14:29:24.574338  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1121 14:29:24.574363  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1121 14:29:24.574390  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1121 14:29:24.574393  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1121 14:29:24.574407  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1121 14:29:24.784169  252125 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.784246  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.964305  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1121 14:29:25.029557  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:25.029626  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:25.445459  252125 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1121 14:29:25.445578  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:26.691152  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.661495413s)
	I1121 14:29:26.691188  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1121 14:29:26.691209  252125 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:26.691206  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.245604103s)
	I1121 14:29:26.691250  252125 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1121 14:29:26.691264  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:26.691297  252125 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:26.691347  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.696141  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:28.100615  252125 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.404441617s)
	I1121 14:29:28.100696  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:28.100615  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.409327822s)
	I1121 14:29:28.100767  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1121 14:29:28.100803  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:28.100853  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:28.132780  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:25.163849  255774 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:29:25.164318  255774 start.go:159] libmachine.API.Create for "default-k8s-diff-port-376255" (driver="docker")
	I1121 14:29:25.164395  255774 client.go:173] LocalClient.Create starting
	I1121 14:29:25.164513  255774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem
	I1121 14:29:25.164575  255774 main.go:143] libmachine: Decoding PEM data...
	I1121 14:29:25.164605  255774 main.go:143] libmachine: Parsing certificate...
	I1121 14:29:25.164704  255774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem
	I1121 14:29:25.164760  255774 main.go:143] libmachine: Decoding PEM data...
	I1121 14:29:25.164776  255774 main.go:143] libmachine: Parsing certificate...
	I1121 14:29:25.165330  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:29:25.188513  255774 cli_runner.go:211] docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:29:25.188614  255774 network_create.go:284] running [docker network inspect default-k8s-diff-port-376255] to gather additional debugging logs...
	I1121 14:29:25.188640  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255
	W1121 14:29:25.213297  255774 cli_runner.go:211] docker network inspect default-k8s-diff-port-376255 returned with exit code 1
	I1121 14:29:25.213338  255774 network_create.go:287] error running [docker network inspect default-k8s-diff-port-376255]: docker network inspect default-k8s-diff-port-376255: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-376255 not found
	I1121 14:29:25.213435  255774 network_create.go:289] output of [docker network inspect default-k8s-diff-port-376255]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-376255 not found
	
	** /stderr **
	I1121 14:29:25.213589  255774 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:25.240844  255774 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-66cfc06dc768 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:44:28:22:82:94} reservation:<nil>}
	I1121 14:29:25.241874  255774 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-39921db0d513 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:e4:85:98:a5:e3} reservation:<nil>}
	I1121 14:29:25.242975  255774 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-36a8741c90a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:21:99:72:63:4a} reservation:<nil>}
	I1121 14:29:25.244042  255774 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-63d543fc8bbd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:58:40:d2:33:c4} reservation:<nil>}
	I1121 14:29:25.245269  255774 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb46e0}
	I1121 14:29:25.245303  255774 network_create.go:124] attempt to create docker network default-k8s-diff-port-376255 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1121 14:29:25.245384  255774 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 default-k8s-diff-port-376255
	I1121 14:29:25.322210  255774 network_create.go:108] docker network default-k8s-diff-port-376255 192.168.85.0/24 created
	I1121 14:29:25.322244  255774 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-376255" container
	I1121 14:29:25.322309  255774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:29:25.346732  255774 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-376255 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:29:25.374919  255774 oci.go:103] Successfully created a docker volume default-k8s-diff-port-376255
	I1121 14:29:25.374994  255774 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-376255-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --entrypoint /usr/bin/test -v default-k8s-diff-port-376255:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:29:26.343288  255774 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-376255
	I1121 14:29:26.343370  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:26.343387  255774 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:29:26.343457  255774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-376255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 14:29:26.582319  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:29:26.606403  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:26.635408  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:26.661287  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:26.686582  249617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:26.703157  249617 ssh_runner.go:195] Run: openssl version
	I1121 14:29:26.712353  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:26.725593  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.732381  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.732523  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.774823  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:26.785127  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:26.796035  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.800685  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.800751  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.842185  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:26.852632  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:26.863838  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.869571  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.869642  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.922017  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:26.934065  249617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:26.939457  249617 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:26.939526  249617 kubeadm.go:401] StartCluster: {Name:old-k8s-version-012258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:26.939648  249617 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:26.939710  249617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:26.978114  249617 cri.go:89] found id: ""
	I1121 14:29:26.978192  249617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:26.989363  249617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:27.000529  249617 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:27.000603  249617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:27.012158  249617 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:27.012179  249617 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:27.012231  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:29:27.022084  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:27.022141  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:27.034139  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:29:27.044897  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:27.045038  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:27.056593  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:29:27.066532  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:27.066615  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:27.077925  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:29:27.088254  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:27.088320  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:27.098442  249617 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:27.205509  249617 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:27.290009  249617 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:29.388121  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:29.388594  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:29.388645  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:29.388690  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:29.416964  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:29.416991  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:29.416996  213058 cri.go:89] found id: ""
	I1121 14:29:29.417006  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:29.417074  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.421476  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.425483  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:29.425557  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:29.453687  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:29.453708  213058 cri.go:89] found id: ""
	I1121 14:29:29.453718  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:29.453783  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.458267  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:29.458353  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:29.485804  213058 cri.go:89] found id: ""
	I1121 14:29:29.485865  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.485876  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:29.485883  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:29.485940  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:29.514265  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:29.514290  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:29.514294  213058 cri.go:89] found id: ""
	I1121 14:29:29.514302  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:29.514349  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.518626  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.522446  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:29.522501  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:29.549770  213058 cri.go:89] found id: ""
	I1121 14:29:29.549799  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.549811  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:29.549819  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:29.549868  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:29.577193  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:29.577217  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:29.577222  213058 cri.go:89] found id: ""
	I1121 14:29:29.577230  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:29.577288  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.581256  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.585291  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:29.585347  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:29.614632  213058 cri.go:89] found id: ""
	I1121 14:29:29.614664  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.614674  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:29.614682  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:29.614740  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:29.645697  213058 cri.go:89] found id: ""
	I1121 14:29:29.645721  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.645730  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:29.645741  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:29.645756  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:29.675578  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:29.675607  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:29.718952  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:29.718990  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:29.750089  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:29.750117  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:29.858708  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:29.858738  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:29.902976  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:29.903013  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:29.938083  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:29.938118  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:29.976329  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:29.976366  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:29.991448  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:29.991485  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:30.053990  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:30.054015  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:30.054032  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:30.089042  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:30.089076  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:30.124498  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:30.124528  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:32.685601  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:32.686035  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:32.686089  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:32.686144  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:32.744948  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:32.745095  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:32.745132  213058 cri.go:89] found id: ""
	I1121 14:29:32.745169  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:32.745355  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.752020  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.760837  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:32.761106  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:32.807418  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:32.807451  213058 cri.go:89] found id: ""
	I1121 14:29:32.807462  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:32.807521  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.813216  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:32.813289  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:32.852598  213058 cri.go:89] found id: ""
	I1121 14:29:32.852633  213058 logs.go:282] 0 containers: []
	W1121 14:29:32.852645  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:32.852653  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:32.852711  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:32.889120  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:32.889144  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:32.889148  213058 cri.go:89] found id: ""
	I1121 14:29:32.889157  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:32.889211  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.894834  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.900572  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:32.900646  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:32.937810  213058 cri.go:89] found id: ""
	I1121 14:29:32.937836  213058 logs.go:282] 0 containers: []
	W1121 14:29:32.937846  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:32.937853  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:32.937914  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:32.975713  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:32.975735  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:32.975741  213058 cri.go:89] found id: ""
	I1121 14:29:32.975751  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:32.975815  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.981574  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.985965  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:32.986030  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:33.019894  213058 cri.go:89] found id: ""
	I1121 14:29:33.019923  213058 logs.go:282] 0 containers: []
	W1121 14:29:33.019935  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:33.019949  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:33.020009  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:33.051872  213058 cri.go:89] found id: ""
	I1121 14:29:33.051901  213058 logs.go:282] 0 containers: []
	W1121 14:29:33.051911  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:33.051923  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:33.051937  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:33.103114  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:33.103153  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:33.142816  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:33.142846  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:33.209677  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:33.209736  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:33.255185  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:33.255220  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:33.272562  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:33.272600  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:33.319098  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:33.319132  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:33.366245  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:33.366286  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:33.410624  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:33.410660  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:33.458217  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:33.458253  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:33.586879  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:33.586919  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1121 14:29:29.835800  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.734910291s)
	I1121 14:29:29.835838  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1121 14:29:29.835860  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:29.835902  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:29.835802  252125 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.702989246s)
	I1121 14:29:29.835965  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1121 14:29:29.836056  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:29.840842  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1121 14:29:29.840873  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1121 14:29:32.866902  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (3.030968163s)
	I1121 14:29:32.866941  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1121 14:29:32.866961  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:32.867002  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:31.901829  255774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-376255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.558304176s)
	I1121 14:29:31.901864  255774 kic.go:203] duration metric: took 5.558473353s to extract preloaded images to volume ...
	W1121 14:29:31.901941  255774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 14:29:31.901969  255774 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 14:29:31.902010  255774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:29:31.985847  255774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-376255 --name default-k8s-diff-port-376255 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --network default-k8s-diff-port-376255 --ip 192.168.85.2 --volume default-k8s-diff-port-376255:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:29:32.403824  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Running}}
	I1121 14:29:32.427802  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.456228  255774 cli_runner.go:164] Run: docker exec default-k8s-diff-port-376255 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:29:32.514766  255774 oci.go:144] the created container "default-k8s-diff-port-376255" has a running status.
	I1121 14:29:32.514799  255774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa...
	I1121 14:29:32.829505  255774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:29:32.861911  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.888316  255774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:29:32.888342  255774 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-376255 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:29:32.948121  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.975355  255774 machine.go:94] provisionDockerMachine start ...
	I1121 14:29:32.975799  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:33.002463  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:33.002813  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:33.002834  255774 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:29:33.003677  255774 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37682->127.0.0.1:33070: read: connection reset by peer
	I1121 14:29:37.228254  249617 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1121 14:29:37.228434  249617 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:37.228644  249617 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:37.228822  249617 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:37.228907  249617 kubeadm.go:319] OS: Linux
	I1121 14:29:37.228971  249617 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:37.229029  249617 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:37.229111  249617 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:37.229198  249617 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:37.229264  249617 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:37.229333  249617 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:37.229403  249617 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:37.229468  249617 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:37.229624  249617 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:37.229762  249617 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:37.229892  249617 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1121 14:29:37.230051  249617 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:37.235113  249617 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:37.235306  249617 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:37.235508  249617 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:37.235691  249617 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:37.235858  249617 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:37.236102  249617 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:37.236205  249617 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:37.236303  249617 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:37.236516  249617 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-012258] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:29:37.236607  249617 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:37.236765  249617 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-012258] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:29:37.236861  249617 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:37.236954  249617 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:37.237021  249617 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:37.237104  249617 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:37.237178  249617 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:37.237257  249617 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:37.237352  249617 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:37.237438  249617 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:37.237554  249617 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:37.237649  249617 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:37.239227  249617 out.go:252]   - Booting up control plane ...
	I1121 14:29:37.239369  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:37.239534  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:37.239682  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:37.239829  249617 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:37.239965  249617 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:37.240022  249617 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:37.240260  249617 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1121 14:29:37.240373  249617 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.503152 seconds
	I1121 14:29:37.240759  249617 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:37.240933  249617 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:37.241035  249617 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:37.241286  249617 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-012258 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:37.241409  249617 kubeadm.go:319] [bootstrap-token] Using token: yix385.n0xejrlt7sdx1ngs
	I1121 14:29:37.243198  249617 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:37.243379  249617 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:37.243497  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:37.243755  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:37.243946  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:37.244147  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:37.244287  249617 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:37.244477  249617 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:37.244564  249617 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:37.244632  249617 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:37.244642  249617 kubeadm.go:319] 
	I1121 14:29:37.244725  249617 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:37.244736  249617 kubeadm.go:319] 
	I1121 14:29:37.244834  249617 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:37.244845  249617 kubeadm.go:319] 
	I1121 14:29:37.244877  249617 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:37.244966  249617 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:37.245033  249617 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:37.245045  249617 kubeadm.go:319] 
	I1121 14:29:37.245111  249617 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:37.245120  249617 kubeadm.go:319] 
	I1121 14:29:37.245178  249617 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:37.245192  249617 kubeadm.go:319] 
	I1121 14:29:37.245274  249617 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:37.245371  249617 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:37.245468  249617 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:37.245476  249617 kubeadm.go:319] 
	I1121 14:29:37.245604  249617 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:37.245734  249617 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:37.245755  249617 kubeadm.go:319] 
	I1121 14:29:37.245866  249617 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yix385.n0xejrlt7sdx1ngs \
	I1121 14:29:37.246024  249617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:37.246062  249617 kubeadm.go:319] 	--control-plane 
	I1121 14:29:37.246072  249617 kubeadm.go:319] 
	I1121 14:29:37.246178  249617 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:37.246189  249617 kubeadm.go:319] 
	I1121 14:29:37.246294  249617 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yix385.n0xejrlt7sdx1ngs \
	I1121 14:29:37.246443  249617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:37.246454  249617 cni.go:84] Creating CNI manager for ""
	I1121 14:29:37.246462  249617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:37.248274  249617 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:36.147516  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376255
	
	I1121 14:29:36.147569  255774 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-376255"
	I1121 14:29:36.147633  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.169609  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:36.169898  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:36.169928  255774 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-376255 && echo "default-k8s-diff-port-376255" | sudo tee /etc/hostname
	I1121 14:29:36.328958  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376255
	
	I1121 14:29:36.329040  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.353105  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:36.353414  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:36.353448  255774 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-376255' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-376255/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-376255' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:29:36.504067  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:29:36.504097  255774 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:29:36.504119  255774 ubuntu.go:190] setting up certificates
	I1121 14:29:36.504133  255774 provision.go:84] configureAuth start
	I1121 14:29:36.504206  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:36.528674  255774 provision.go:143] copyHostCerts
	I1121 14:29:36.528752  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:29:36.528762  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:29:36.528840  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:29:36.528968  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:29:36.528997  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:29:36.529043  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:29:36.529141  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:29:36.529152  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:29:36.529188  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:29:36.529281  255774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-376255 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-376255 localhost minikube]
	I1121 14:29:36.617208  255774 provision.go:177] copyRemoteCerts
	I1121 14:29:36.617283  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:29:36.617345  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.639948  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:36.749486  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:29:36.777360  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1121 14:29:36.804875  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:29:36.830920  255774 provision.go:87] duration metric: took 326.762892ms to configureAuth
	I1121 14:29:36.830953  255774 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:29:36.831165  255774 config.go:182] Loaded profile config "default-k8s-diff-port-376255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:36.831181  255774 machine.go:97] duration metric: took 3.855604158s to provisionDockerMachine
	I1121 14:29:36.831191  255774 client.go:176] duration metric: took 11.666782197s to LocalClient.Create
	I1121 14:29:36.831216  255774 start.go:167] duration metric: took 11.666902979s to libmachine.API.Create "default-k8s-diff-port-376255"
	I1121 14:29:36.831234  255774 start.go:293] postStartSetup for "default-k8s-diff-port-376255" (driver="docker")
	I1121 14:29:36.831254  255774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:29:36.831311  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:29:36.831360  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.855811  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:36.969760  255774 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:29:36.974452  255774 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:29:36.974529  255774 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:29:36.974577  255774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:29:36.974658  255774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:29:36.974771  255774 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:29:36.974903  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:29:36.984975  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:37.017462  255774 start.go:296] duration metric: took 186.210262ms for postStartSetup
	I1121 14:29:37.017947  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:37.041309  255774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json ...
	I1121 14:29:37.041659  255774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:29:37.041731  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.070697  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.177189  255774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:29:37.185711  255774 start.go:128] duration metric: took 12.024042461s to createHost
	I1121 14:29:37.185741  255774 start.go:83] releasing machines lock for "default-k8s-diff-port-376255", held for 12.024206528s
	I1121 14:29:37.185820  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:37.211853  255774 ssh_runner.go:195] Run: cat /version.json
	I1121 14:29:37.211903  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.211965  255774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:29:37.212033  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.238575  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.242252  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.421321  255774 ssh_runner.go:195] Run: systemctl --version
	I1121 14:29:37.431728  255774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:29:37.437939  255774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:29:37.438053  255774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:29:37.469409  255774 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:29:37.469437  255774 start.go:496] detecting cgroup driver to use...
	I1121 14:29:37.469471  255774 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:29:37.469521  255774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:29:37.490669  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:29:37.507754  255774 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:29:37.507821  255774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:29:37.525644  255774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:29:37.545289  255774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:29:37.674060  255774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:29:37.795128  255774 docker.go:234] disabling docker service ...
	I1121 14:29:37.795198  255774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:29:37.819043  255774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:29:37.834819  255774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:29:37.960408  255774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:29:38.072269  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:29:38.089314  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:29:38.105248  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:29:38.117445  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:29:38.128509  255774 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:29:38.128607  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:29:38.139526  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:38.150896  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:29:38.161459  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:38.173179  255774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:29:38.183645  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:29:38.194923  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:29:38.207896  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:29:38.220346  255774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:29:38.230823  255774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:29:38.241807  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:38.339708  255774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:29:38.460319  255774 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:29:38.460387  255774 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:29:38.465812  255774 start.go:564] Will wait 60s for crictl version
	I1121 14:29:38.465875  255774 ssh_runner.go:195] Run: which crictl
	I1121 14:29:38.470166  255774 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:29:38.507773  255774 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:29:38.507860  255774 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:38.532247  255774 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:38.559098  255774 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
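The block above shows how minikube prepares the container runtime on the kicbase node: crio and docker are stopped and masked, /etc/crictl.yaml is pointed at the containerd socket, and /etc/containerd/config.toml is rewritten in place so containerd uses the systemd cgroup driver and the pinned pause image before the service is restarted. A condensed, hand-runnable sketch of the same edits follows; it assumes the stock kicbase config.toml layout and only illustrates the sed/systemctl sequence logged above, not minikube's actual code path.

	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
	sudo systemctl daemon-reload && sudo systemctl restart containerd
	sudo crictl version    # expect RuntimeName: containerd, RuntimeVersion: v2.1.5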
	W1121 14:29:33.655577  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:33.655599  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:33.655612  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.225853  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:36.226247  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:36.226304  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:36.226364  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:36.259583  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:36.259613  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:36.259619  213058 cri.go:89] found id: ""
	I1121 14:29:36.259628  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:36.259690  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.264798  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.269597  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:36.269663  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:36.304312  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:36.304335  213058 cri.go:89] found id: ""
	I1121 14:29:36.304346  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:36.304403  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.309760  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:36.309833  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:36.342617  213058 cri.go:89] found id: ""
	I1121 14:29:36.342643  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.342653  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:36.342660  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:36.342722  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:36.378880  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.378909  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:36.378914  213058 cri.go:89] found id: ""
	I1121 14:29:36.378924  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:36.378996  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.384032  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.388866  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:36.388932  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:36.427253  213058 cri.go:89] found id: ""
	I1121 14:29:36.427282  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.427293  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:36.427300  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:36.427355  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:36.461581  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:36.461604  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:36.461609  213058 cri.go:89] found id: ""
	I1121 14:29:36.461618  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:36.461677  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.466623  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.471422  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:36.471490  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:36.503502  213058 cri.go:89] found id: ""
	I1121 14:29:36.503533  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.503566  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:36.503575  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:36.503633  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:36.538350  213058 cri.go:89] found id: ""
	I1121 14:29:36.538379  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.538390  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:36.538404  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:36.538419  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:36.666987  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:36.667025  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:36.685628  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:36.685659  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:36.763464  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:36.763491  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:36.763508  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:36.808789  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:36.808832  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.887558  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:36.887596  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:36.952391  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:36.952434  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:36.993139  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:36.993167  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:37.037499  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:37.037552  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:37.084237  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:37.084270  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:37.132236  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:37.132272  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:37.172720  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:37.172753  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
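Every "Gathering logs for ..." step above follows the same pattern: resolve the container id for a component with crictl, then tail its logs. A minimal manual equivalent using the same flags the log shows (the id is whatever crictl returns on the node being debugged):

	# pick the most recent kube-apiserver container, running or exited, and tail it
	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	sudo crictl logs --tail 400 "$ID"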
	I1121 14:29:34.341753  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.474720913s)
	I1121 14:29:34.341781  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1121 14:29:34.341812  252125 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:34.341855  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:37.308520  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.966633628s)
	I1121 14:29:37.308585  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1121 14:29:37.308616  252125 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:37.308666  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:37.772300  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1121 14:29:37.772349  252125 cache_images.go:125] Successfully loaded all cached images
	I1121 14:29:37.772358  252125 cache_images.go:94] duration metric: took 13.627858156s to LoadCachedImages
	I1121 14:29:37.772375  252125 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1121 14:29:37.772522  252125 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-921956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-921956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:37.772622  252125 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:37.802988  252125 cni.go:84] Creating CNI manager for ""
	I1121 14:29:37.803017  252125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:37.803041  252125 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:37.803067  252125 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-921956 NodeName:no-preload-921956 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:37.803212  252125 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-921956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
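The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below) before kubeadm consumes it. As a hedged way to sanity-check such a file by hand, assuming the bundled kubeadm (v1.34.1 ships the validate subcommand) and the path taken from the log:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run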
	
	I1121 14:29:37.803298  252125 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:37.814189  252125 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1121 14:29:37.814255  252125 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:37.824124  252125 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1121 14:29:37.824214  252125 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1121 14:29:37.824231  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1121 14:29:37.824217  252125 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1121 14:29:37.829417  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1121 14:29:37.829466  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1121 14:29:38.860713  252125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:29:38.875498  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1121 14:29:38.880447  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1121 14:29:38.880477  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1121 14:29:39.014274  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1121 14:29:39.021151  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1121 14:29:39.021187  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1121 14:29:39.234010  252125 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:39.244382  252125 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1121 14:29:39.259897  252125 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:39.279143  252125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
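Because this profile runs without a preload tarball, kubectl, kubelet and kubeadm are fetched from dl.k8s.io together with their .sha256 files and then copied into /var/lib/minikube/binaries/v1.34.1. A standalone sketch of that checksum-verified download for one binary, with the version and URLs as logged (the install destination is the same directory the log scp's into):

	VER=v1.34.1
	curl -fLO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubectl"
	curl -fLO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubectl.sha256"
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check    # expect: kubectl: OK
	sudo install -m 0755 kubectl /var/lib/minikube/binaries/${VER}/kubectl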
	I1121 14:29:38.560688  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:38.580956  255774 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:29:38.585728  255774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:38.599140  255774 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:29:38.599295  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:38.599391  255774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:38.631637  255774 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:38.631660  255774 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:29:38.631720  255774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:38.665498  255774 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:38.665522  255774 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:29:38.665530  255774 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1121 14:29:38.665659  255774 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-376255 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:38.665752  255774 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:38.694106  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:38.694138  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:38.694156  255774 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:38.694182  255774 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-376255 NodeName:default-k8s-diff-port-376255 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:38.694318  255774 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-376255"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:38.694377  255774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:38.704016  255774 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:29:38.704074  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:38.712471  255774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1121 14:29:38.726311  255774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:38.743589  255774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1121 14:29:38.759275  255774 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:38.763723  255774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:38.775814  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:38.870850  255774 ssh_runner.go:195] Run: sudo systemctl start kubelet
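By this point the unit file at /lib/systemd/system/kubelet.service and the 10-kubeadm.conf drop-in copied a few lines above carry the node-specific ExecStart flags, and /etc/hosts pins control-plane.minikube.internal to 192.168.85.2. A couple of hedged spot-checks of what the node ended up with:

	systemctl cat kubelet                           # unit plus the 10-kubeadm.conf drop-in
	systemctl is-active kubelet
	getent hosts control-plane.minikube.internal    # expect 192.168.85.2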
	I1121 14:29:38.898876  255774 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255 for IP: 192.168.85.2
	I1121 14:29:38.898898  255774 certs.go:195] generating shared ca certs ...
	I1121 14:29:38.898917  255774 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:38.899068  255774 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:38.899116  255774 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:38.899130  255774 certs.go:257] generating profile certs ...
	I1121 14:29:38.899196  255774 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key
	I1121 14:29:38.899223  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt with IP's: []
	I1121 14:29:39.101636  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt ...
	I1121 14:29:39.101669  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt: {Name:mk48f410a390b01d5b10a9357a2648374ae8306b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.101873  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key ...
	I1121 14:29:39.101885  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key: {Name:mkb89c45215e08640f5b5fa9a6de6863ea0983e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.102008  255774 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066
	I1121 14:29:39.102024  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:29:39.438352  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 ...
	I1121 14:29:39.438387  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066: {Name:mkc5f7dc938a9541dec0c2accd850515b39a25d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.438574  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066 ...
	I1121 14:29:39.438586  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066: {Name:mka67f2d91e35acd02a0ed4174188db6877ef796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.438666  255774 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt
	I1121 14:29:39.438744  255774 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key
	I1121 14:29:39.438811  255774 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key
	I1121 14:29:39.438826  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt with IP's: []
	I1121 14:29:39.523793  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt ...
	I1121 14:29:39.523827  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt: {Name:mk2418751bb08ae4f2cae2628ba430b2e731f823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.524011  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key ...
	I1121 14:29:39.524031  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key: {Name:mk12031f310020bd38886fd870544563c6ab1faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.524255  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:39.524307  255774 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:39.524323  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:39.524353  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:39.524383  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:39.524407  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:39.524445  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
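The profile's apiserver certificate above is signed by the shared minikube CA and carries the service and node IPs as SANs. A self-contained Go sketch of that general pattern, using a throwaway CA; the names, key sizes, and validity here are assumptions for illustration rather than what the run used:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the shared minikube CA (illustrative only).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Serving certificate carrying the same IP SANs seen in the log above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// PEM-encode the leaf; the private key would be written out the same way.
	out, err := os.Create("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}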
	I1121 14:29:39.525071  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:39.546065  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:39.565880  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:39.585450  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:39.604394  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1121 14:29:39.623736  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:29:39.642460  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:39.661463  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:29:39.681314  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:39.879137  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:39.899730  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:39.918630  255774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:39.935942  255774 ssh_runner.go:195] Run: openssl version
	I1121 14:29:39.943062  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:40.020861  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.026152  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.026209  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.067681  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:40.077051  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:40.087944  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.092369  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.092434  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.132125  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:40.142255  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:40.152828  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.157171  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.157265  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.199881  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
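The openssl x509 -hash / ln -fs pairs above publish each CA PEM in /etc/ssl/certs under its OpenSSL subject hash with a .0 suffix, which is how OpenSSL's default verifier locates trust anchors. A small Go sketch of the same step; the paths and the helper name hashLink are illustrative, and the hash is obtained by shelling out to openssl:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink links certPEM into certsDir as <subject-hash>.0, mirroring the
// "openssl x509 -hash" plus "ln -fs" pair in the log above.
func hashLink(certPEM, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPEM).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // "-f" behaviour: replace an existing link
	return os.Symlink(certPEM, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}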
	I1121 14:29:40.210053  255774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:40.214456  255774 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:40.214524  255774 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:40.214625  255774 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:40.214692  255774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:40.249359  255774 cri.go:89] found id: ""
	I1121 14:29:40.249429  255774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:40.259121  255774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:40.270847  255774 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:40.270910  255774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:40.283266  255774 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:40.283287  255774 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:40.283341  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1121 14:29:40.293676  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:40.293725  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:40.303277  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1121 14:29:40.313015  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:40.313073  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:40.322086  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.330920  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:40.331015  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.339376  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1121 14:29:40.347984  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:40.348046  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:40.356683  255774 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:40.404354  255774 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:29:40.404455  255774 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:40.435448  255774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:40.435583  255774 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:40.435628  255774 kubeadm.go:319] OS: Linux
	I1121 14:29:40.435689  255774 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:40.435827  255774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:40.435905  255774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:40.436039  255774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:40.436108  255774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:40.436176  255774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:40.436276  255774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:40.436351  255774 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:40.508224  255774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:40.508370  255774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:40.508531  255774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:29:40.513996  255774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
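The CGROUPS_* lines above are kubeadm's system verification reporting which cgroup controllers are available. On a cgroup v2 host (an assumption for this sketch; a v1 host would be inspected via /proc/cgroups instead) the enabled controllers can be read in one step from /sys/fs/cgroup/cgroup.controllers, roughly as follows:

package main

import (
	"fmt"
	"log"
	"os"
	"slices"
	"strings"
)

func main() {
	// cgroup v2 exposes the enabled controllers as one space-separated list.
	data, err := os.ReadFile("/sys/fs/cgroup/cgroup.controllers")
	if err != nil {
		log.Fatal(err) // probably a cgroup v1 host; /proc/cgroups would be checked there
	}
	enabled := strings.Fields(string(data))
	for _, c := range []string{"cpu", "cpuset", "memory", "pids", "hugetlb", "io"} {
		fmt.Printf("CGROUPS_%s: enabled=%v\n", strings.ToUpper(c), slices.Contains(enabled, c))
	}
}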
	I1121 14:29:39.295828  252125 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:39.301164  252125 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:39.312709  252125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:39.400897  252125 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:39.429294  252125 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956 for IP: 192.168.103.2
	I1121 14:29:39.429315  252125 certs.go:195] generating shared ca certs ...
	I1121 14:29:39.429332  252125 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.429485  252125 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:39.429583  252125 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:39.429600  252125 certs.go:257] generating profile certs ...
	I1121 14:29:39.429678  252125 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key
	I1121 14:29:39.429693  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt with IP's: []
	I1121 14:29:39.556088  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt ...
	I1121 14:29:39.556115  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: {Name:mkc697edce2d4ccb5a4a2ccbe74255aef4a205c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.556297  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key ...
	I1121 14:29:39.556312  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key: {Name:mkad7b167b883af61314c3f8b6c71358edc782dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.556419  252125 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d
	I1121 14:29:39.556435  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1121 14:29:39.871499  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d ...
	I1121 14:29:39.871529  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d: {Name:mkc839b1c936af809ed1159ef4599336fd260d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.871726  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d ...
	I1121 14:29:39.871748  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d: {Name:mkc2f0abcac84f6547f3e0edb165e90b14fdd7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.871882  252125 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt
	I1121 14:29:39.871997  252125 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key
	I1121 14:29:39.872096  252125 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key
	I1121 14:29:39.872120  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt with IP's: []
	I1121 14:29:40.083173  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt ...
	I1121 14:29:40.083201  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt: {Name:mkba7efd029f616230e0b3cf14c4f32abac0549e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:40.083385  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key ...
	I1121 14:29:40.083414  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key: {Name:mk24f6fbb57f5dfce4a401be193e0a832a6ccf6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:40.083661  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:40.083700  252125 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:40.083711  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:40.083749  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:40.083780  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:40.083827  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:40.083887  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:40.084653  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:40.106430  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:40.126520  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:40.148412  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:40.169973  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:29:40.191493  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:29:40.214458  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:40.234692  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:29:40.261986  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:40.352437  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:40.372804  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:40.394700  252125 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:40.411183  252125 ssh_runner.go:195] Run: openssl version
	I1121 14:29:40.419607  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:40.431060  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.436371  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.436429  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.481320  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:40.492797  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:40.502878  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.507432  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.507499  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.567779  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:40.577673  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:40.587826  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.592472  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.592528  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.627626  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:40.637464  252125 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:40.641884  252125 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:40.641943  252125 kubeadm.go:401] StartCluster: {Name:no-preload-921956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-921956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:40.642030  252125 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:40.642085  252125 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:40.673351  252125 cri.go:89] found id: ""
	I1121 14:29:40.673423  252125 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:40.682715  252125 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:40.691493  252125 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:40.691581  252125 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:40.700143  252125 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:40.700160  252125 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:40.700205  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:29:40.708734  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:40.708799  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:40.717135  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:29:40.726191  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:40.726262  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:40.734074  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.742647  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:40.742709  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.751091  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:29:40.759770  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:40.759841  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:40.768253  252125 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:40.810825  252125 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:29:40.810892  252125 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:40.831836  252125 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:40.831940  252125 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:40.832026  252125 kubeadm.go:319] OS: Linux
	I1121 14:29:40.832115  252125 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:40.832212  252125 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:40.832286  252125 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:40.832358  252125 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:40.832432  252125 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:40.832504  252125 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:40.832668  252125 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:40.832735  252125 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:40.895341  252125 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:40.895491  252125 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:40.895637  252125 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:29:40.901358  252125 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:37.249631  249617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:37.262987  249617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1121 14:29:37.263020  249617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:37.283444  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:38.138719  249617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:38.138808  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:38.138810  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-012258 minikube.k8s.io/updated_at=2025_11_21T14_29_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=old-k8s-version-012258 minikube.k8s.io/primary=true
	I1121 14:29:38.150782  249617 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:38.225220  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:38.726231  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:39.225533  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:39.725591  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.225601  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.725734  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:41.226112  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
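The repeated "get sa default" runs above are a readiness poll: kubeadm has finished, and the tooling waits for kube-controller-manager to create the default ServiceAccount before moving on. A rough Go sketch of such a poll; the interval, the timeout, and the reuse of the paths shown in the log are assumptions for illustration:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Poll roughly twice a second, as the repeated runs above do, until the
	// default ServiceAccount exists in the target cluster.
	kubectl := "/var/lib/minikube/binaries/v1.28.0/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for the default ServiceAccount")
}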
	I1121 14:29:40.521190  255774 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:40.521325  255774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:40.521431  255774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:41.003970  255774 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:41.240665  255774 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:41.425685  255774 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:41.689428  255774 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:41.923373  255774 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:41.923563  255774 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-376255 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:29:42.051973  255774 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:42.052979  255774 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-376255 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:29:42.277531  255774 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:42.491572  255774 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:42.605458  255774 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:42.605535  255774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:42.870659  255774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:43.039072  255774 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:29:43.228611  255774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:43.489903  255774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:43.563271  255774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:43.563948  255774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:43.568453  255774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:39.727688  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:39.728083  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:39.728134  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:39.728197  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:39.758413  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:39.758436  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:39.758441  213058 cri.go:89] found id: ""
	I1121 14:29:39.758452  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:39.758508  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.763439  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.767912  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:39.767980  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:39.802923  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:39.802948  213058 cri.go:89] found id: ""
	I1121 14:29:39.802957  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:39.803013  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.807778  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:39.807853  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:39.835286  213058 cri.go:89] found id: ""
	I1121 14:29:39.835314  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.835335  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:39.835343  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:39.835408  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:39.864986  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:39.865034  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:39.865040  213058 cri.go:89] found id: ""
	I1121 14:29:39.865050  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:39.865105  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.869441  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.873676  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:39.873739  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:39.902671  213058 cri.go:89] found id: ""
	I1121 14:29:39.902698  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.902707  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:39.902715  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:39.902762  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:39.933452  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:39.933477  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:39.933483  213058 cri.go:89] found id: ""
	I1121 14:29:39.933492  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:39.933557  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.938051  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.942029  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:39.942094  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:39.969991  213058 cri.go:89] found id: ""
	I1121 14:29:39.970018  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.970028  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:39.970036  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:39.970086  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:39.997381  213058 cri.go:89] found id: ""
	I1121 14:29:39.997406  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.997417  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:39.997429  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:39.997443  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:40.027188  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:40.027213  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:40.067878  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:40.067906  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:40.101358  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:40.101388  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:40.115674  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:40.115704  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:40.153845  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:40.153871  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:40.188913  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:40.188944  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:40.244995  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:40.245033  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:40.351506  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:40.351558  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:40.417221  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:40.417244  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:40.417263  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:40.457789  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:40.457836  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:40.520712  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:40.520748  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
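The healthz probes in this block call https://192.168.76.2:8443/healthz directly and treat a refused connection as the apiserver being stopped. A minimal Go sketch of that kind of probe; skipping TLS verification here is an illustrative shortcut, a real client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative shortcut: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
}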
	I1121 14:29:43.056648  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:43.057094  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:43.057150  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:43.057204  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:43.085236  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:43.085260  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:43.085265  213058 cri.go:89] found id: ""
	I1121 14:29:43.085275  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:43.085333  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.089868  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.094074  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:43.094134  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:43.122420  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:43.122447  213058 cri.go:89] found id: ""
	I1121 14:29:43.122457  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:43.122512  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.126830  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:43.126892  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:43.156518  213058 cri.go:89] found id: ""
	I1121 14:29:43.156566  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.156577  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:43.156584  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:43.156646  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:43.185212  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:43.185233  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.185238  213058 cri.go:89] found id: ""
	I1121 14:29:43.185277  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:43.185338  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.190000  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.194074  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:43.194131  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:43.224175  213058 cri.go:89] found id: ""
	I1121 14:29:43.224201  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.224211  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:43.224218  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:43.224277  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:43.258260  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:43.258292  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:43.258299  213058 cri.go:89] found id: ""
	I1121 14:29:43.258310  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:43.258378  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.263276  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.268195  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:43.268264  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:43.303269  213058 cri.go:89] found id: ""
	I1121 14:29:43.303300  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.303311  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:43.303319  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:43.303379  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:43.333956  213058 cri.go:89] found id: ""
	I1121 14:29:43.333985  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.333995  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:43.334007  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:43.334021  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:43.366338  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:43.366369  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:43.458987  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:43.459027  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:43.497960  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:43.497995  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:43.539997  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:43.540035  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:43.575882  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:43.575911  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:40.903405  252125 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:40.903502  252125 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:40.903630  252125 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:41.180390  252125 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:41.211121  252125 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:41.523007  252125 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:42.461521  252125 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:42.641495  252125 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:42.641701  252125 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-921956] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:29:42.773640  252125 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:42.773843  252125 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-921956] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:29:42.921369  252125 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:43.256203  252125 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:43.834470  252125 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:43.834645  252125 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:43.949422  252125 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:44.093777  252125 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:29:44.227287  252125 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:44.509482  252125 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:44.696294  252125 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:44.696767  252125 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:44.705846  252125 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:43.573374  255774 out.go:252]   - Booting up control plane ...
	I1121 14:29:43.573510  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:43.573669  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:43.573781  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:43.590344  255774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:43.590494  255774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:29:43.599838  255774 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:29:43.600184  255774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:43.600247  255774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:43.720721  255774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:29:43.720878  255774 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:29:44.721899  255774 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001196965s
	I1121 14:29:44.724830  255774 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:29:44.724972  255774 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1121 14:29:44.725131  255774 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:29:44.725253  255774 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:29:41.726266  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:42.225460  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:42.725727  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.225740  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.725669  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:44.225350  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:44.725651  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:45.226025  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:45.725289  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:46.226316  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.632243  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:43.632278  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.681909  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:43.681959  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:43.723402  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:43.723454  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:43.776606  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:43.776641  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:43.793171  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:43.793200  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:43.854264  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:43.854293  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:43.854308  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:46.383659  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:46.384075  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:46.384128  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:46.384191  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:46.441629  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:46.441734  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:46.441754  213058 cri.go:89] found id: ""
	I1121 14:29:46.441776  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:46.441873  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.447714  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.453337  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:46.453422  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:46.497451  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:46.497475  213058 cri.go:89] found id: ""
	I1121 14:29:46.497485  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:46.497585  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.504731  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:46.504801  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:46.562972  213058 cri.go:89] found id: ""
	I1121 14:29:46.563014  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.563027  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:46.563036  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:46.563287  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:46.611186  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:46.611216  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:46.611221  213058 cri.go:89] found id: ""
	I1121 14:29:46.611231  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:46.611289  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.620404  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.626388  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:46.626559  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:46.674192  213058 cri.go:89] found id: ""
	I1121 14:29:46.674247  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.674259  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:46.674267  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:46.674448  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:46.749738  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:46.749765  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:46.749771  213058 cri.go:89] found id: ""
	I1121 14:29:46.749780  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:46.749835  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.756273  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.763986  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:46.764120  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:46.811858  213058 cri.go:89] found id: ""
	I1121 14:29:46.811883  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.811901  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:46.811909  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:46.811963  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:46.849599  213058 cri.go:89] found id: ""
	I1121 14:29:46.849645  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.849655  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:46.849666  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:46.849683  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:46.913988  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:46.914024  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:46.953189  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:46.953227  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:47.001663  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:47.001705  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:47.041106  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:47.041137  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:47.107673  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:47.107712  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:47.240432  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:47.240473  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:47.288852  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:47.288894  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1121 14:29:46.531314  255774 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.80645272s
	I1121 14:29:47.509316  255774 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.784421033s
	I1121 14:29:49.226647  255774 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501794549s
	I1121 14:29:49.239409  255774 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:49.252719  255774 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:49.264076  255774 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:49.264371  255774 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-376255 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:49.274799  255774 kubeadm.go:319] [bootstrap-token] Using token: 8nwcfl.9utqukqcvuro6a4p
	I1121 14:29:44.769338  252125 out.go:252]   - Booting up control plane ...
	I1121 14:29:44.769476  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:44.769652  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:44.769771  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:44.769940  252125 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:44.770087  252125 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:29:44.778391  252125 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:29:44.779655  252125 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:44.779729  252125 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:44.894196  252125 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:29:44.894364  252125 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:29:45.895053  252125 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000974959s
	I1121 14:29:45.898754  252125 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:29:45.898875  252125 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1121 14:29:45.899003  252125 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:29:45.899149  252125 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:29:48.621169  252125 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.722350043s
	I1121 14:29:49.059709  252125 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.160801257s
	I1121 14:29:49.276414  255774 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:49.276590  255774 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:49.280532  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:49.287374  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:49.290401  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:49.293308  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:49.297552  255774 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:49.632747  255774 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:46.726037  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:47.228665  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:47.725338  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:48.226199  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:48.725959  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:49.225812  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:49.725337  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.225293  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.310282  249617 kubeadm.go:1114] duration metric: took 12.17154172s to wait for elevateKubeSystemPrivileges
	I1121 14:29:50.310322  249617 kubeadm.go:403] duration metric: took 23.370802852s to StartCluster
	I1121 14:29:50.310347  249617 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:50.310438  249617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:50.311864  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:50.312167  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:50.312169  249617 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:50.312267  249617 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:50.312352  249617 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-012258"
	I1121 14:29:50.312372  249617 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-012258"
	I1121 14:29:50.312403  249617 host.go:66] Checking if "old-k8s-version-012258" exists ...
	I1121 14:29:50.312458  249617 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:50.312516  249617 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-012258"
	I1121 14:29:50.312530  249617 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-012258"
	I1121 14:29:50.312827  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.312965  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.314603  249617 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:50.316238  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:50.339724  249617 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:50.056893  255774 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:50.634602  255774 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:50.635720  255774 kubeadm.go:319] 
	I1121 14:29:50.635840  255774 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:50.635916  255774 kubeadm.go:319] 
	I1121 14:29:50.636085  255774 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:50.636139  255774 kubeadm.go:319] 
	I1121 14:29:50.636189  255774 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:50.636300  255774 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:50.636386  255774 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:50.636448  255774 kubeadm.go:319] 
	I1121 14:29:50.636574  255774 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:50.636584  255774 kubeadm.go:319] 
	I1121 14:29:50.636647  255774 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:50.636652  255774 kubeadm.go:319] 
	I1121 14:29:50.636709  255774 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:50.636796  255774 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:50.636878  255774 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:50.636886  255774 kubeadm.go:319] 
	I1121 14:29:50.636981  255774 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:50.637083  255774 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:50.637090  255774 kubeadm.go:319] 
	I1121 14:29:50.637247  255774 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 8nwcfl.9utqukqcvuro6a4p \
	I1121 14:29:50.637414  255774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:50.637449  255774 kubeadm.go:319] 	--control-plane 
	I1121 14:29:50.637460  255774 kubeadm.go:319] 
	I1121 14:29:50.637571  255774 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:50.637580  255774 kubeadm.go:319] 
	I1121 14:29:50.637672  255774 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 8nwcfl.9utqukqcvuro6a4p \
	I1121 14:29:50.637785  255774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:50.642202  255774 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:50.642513  255774 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:50.642647  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:50.642693  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:50.645524  255774 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:50.339929  249617 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-012258"
	I1121 14:29:50.339977  249617 host.go:66] Checking if "old-k8s-version-012258" exists ...
	I1121 14:29:50.340433  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.341133  249617 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:50.341154  249617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:50.341208  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:50.377822  249617 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:50.377846  249617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:50.377844  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:50.377907  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:50.410483  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:50.415901  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:50.468678  249617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:50.503643  249617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:50.536480  249617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:50.667362  249617 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:50.668484  249617 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-012258" to be "Ready" ...
	I1121 14:29:50.954598  249617 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:29:50.401999  252125 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502477764s
	I1121 14:29:50.419850  252125 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:50.933016  252125 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:50.948821  252125 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:50.949093  252125 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-921956 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:50.961417  252125 kubeadm.go:319] [bootstrap-token] Using token: uhuim0.7wh8hbt7v76eo7qs
	I1121 14:29:50.955828  249617 addons.go:530] duration metric: took 643.55365ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:51.174831  249617 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-012258" context rescaled to 1 replicas
	I1121 14:29:50.963415  252125 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:50.963588  252125 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:50.971176  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:50.980644  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:50.985255  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:50.989946  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:50.994015  252125 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:51.128309  252125 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:51.550178  252125 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:52.128624  252125 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:52.129402  252125 kubeadm.go:319] 
	I1121 14:29:52.129496  252125 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:52.129528  252125 kubeadm.go:319] 
	I1121 14:29:52.129657  252125 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:52.129669  252125 kubeadm.go:319] 
	I1121 14:29:52.129705  252125 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:52.129798  252125 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:52.129906  252125 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:52.129923  252125 kubeadm.go:319] 
	I1121 14:29:52.129995  252125 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:52.130004  252125 kubeadm.go:319] 
	I1121 14:29:52.130078  252125 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:52.130087  252125 kubeadm.go:319] 
	I1121 14:29:52.130170  252125 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:52.130304  252125 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:52.130418  252125 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:52.130446  252125 kubeadm.go:319] 
	I1121 14:29:52.130574  252125 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:52.130677  252125 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:52.130685  252125 kubeadm.go:319] 
	I1121 14:29:52.130797  252125 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uhuim0.7wh8hbt7v76eo7qs \
	I1121 14:29:52.130966  252125 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:52.131000  252125 kubeadm.go:319] 	--control-plane 
	I1121 14:29:52.131035  252125 kubeadm.go:319] 
	I1121 14:29:52.131212  252125 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:52.131230  252125 kubeadm.go:319] 
	I1121 14:29:52.131343  252125 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uhuim0.7wh8hbt7v76eo7qs \
	I1121 14:29:52.131485  252125 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:52.132830  252125 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:52.132967  252125 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:52.133003  252125 cni.go:84] Creating CNI manager for ""
	I1121 14:29:52.133014  252125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:52.134968  252125 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:52.136241  252125 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:52.141107  252125 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:29:52.141131  252125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:52.155585  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:52.395340  252125 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:52.395422  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.395526  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-921956 minikube.k8s.io/updated_at=2025_11_21T14_29_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=no-preload-921956 minikube.k8s.io/primary=true
	I1121 14:29:52.481012  252125 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:52.481125  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.982198  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.481748  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.981282  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.646815  255774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:50.654615  255774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:29:50.654642  255774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:50.673887  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:50.944978  255774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:50.945143  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.945309  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-376255 minikube.k8s.io/updated_at=2025_11_21T14_29_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=default-k8s-diff-port-376255 minikube.k8s.io/primary=true
	I1121 14:29:50.960009  255774 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:51.036596  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:51.537134  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.037345  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.536941  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.037592  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.536966  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.036678  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.536697  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.037499  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.536808  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.610391  255774 kubeadm.go:1114] duration metric: took 4.665295307s to wait for elevateKubeSystemPrivileges
	I1121 14:29:55.610426  255774 kubeadm.go:403] duration metric: took 15.395907943s to StartCluster
	I1121 14:29:55.610448  255774 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:55.610511  255774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:55.612071  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:55.612346  255774 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:55.612498  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:55.612612  255774 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:55.612696  255774 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-376255"
	I1121 14:29:55.612713  255774 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-376255"
	I1121 14:29:55.612745  255774 host.go:66] Checking if "default-k8s-diff-port-376255" exists ...
	I1121 14:29:55.612775  255774 config.go:182] Loaded profile config "default-k8s-diff-port-376255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:55.612835  255774 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-376255"
	I1121 14:29:55.612852  255774 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-376255"
	I1121 14:29:55.613218  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.613392  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.613476  255774 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:55.615420  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:55.641842  255774 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-376255"
	I1121 14:29:55.641893  255774 host.go:66] Checking if "default-k8s-diff-port-376255" exists ...
	I1121 14:29:55.642317  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.647007  255774 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:55.648771  255774 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:55.648807  255774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:55.648882  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:55.679690  255774 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:55.679713  255774 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:55.679780  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:55.680868  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:55.703091  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:55.713751  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:55.781953  255774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:55.795189  255774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:55.811872  255774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:55.895061  255774 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:55.896386  255774 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-376255" to be "Ready" ...
	I1121 14:29:56.162438  255774 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1121 14:29:52.672645  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:29:55.172665  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:29:54.481750  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.981303  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.481778  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.981846  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:56.481336  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:56.981822  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:57.056720  252125 kubeadm.go:1114] duration metric: took 4.66135199s to wait for elevateKubeSystemPrivileges
	I1121 14:29:57.056760  252125 kubeadm.go:403] duration metric: took 16.414821557s to StartCluster
	I1121 14:29:57.056783  252125 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:57.056866  252125 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:57.059279  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:57.059591  252125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:57.059595  252125 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:57.059668  252125 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:57.059755  252125 addons.go:70] Setting storage-provisioner=true in profile "no-preload-921956"
	I1121 14:29:57.059780  252125 addons.go:239] Setting addon storage-provisioner=true in "no-preload-921956"
	I1121 14:29:57.059783  252125 addons.go:70] Setting default-storageclass=true in profile "no-preload-921956"
	I1121 14:29:57.059810  252125 host.go:66] Checking if "no-preload-921956" exists ...
	I1121 14:29:57.059818  252125 config.go:182] Loaded profile config "no-preload-921956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:57.059810  252125 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-921956"
	I1121 14:29:57.060267  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.060366  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.061615  252125 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:57.063049  252125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:57.087511  252125 addons.go:239] Setting addon default-storageclass=true in "no-preload-921956"
	I1121 14:29:57.087574  252125 host.go:66] Checking if "no-preload-921956" exists ...
	I1121 14:29:57.088046  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.088842  252125 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:57.090553  252125 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:57.090577  252125 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:57.090634  252125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-921956
	I1121 14:29:57.113518  252125 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:57.113567  252125 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:57.113644  252125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-921956
	I1121 14:29:57.116604  252125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/no-preload-921956/id_rsa Username:docker}
	I1121 14:29:57.140626  252125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/no-preload-921956/id_rsa Username:docker}
	I1121 14:29:57.162241  252125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:57.221336  252125 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:57.237060  252125 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:57.259845  252125 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:57.393470  252125 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:57.394577  252125 node_ready.go:35] waiting up to 6m0s for node "no-preload-921956" to be "Ready" ...
	I1121 14:29:57.623024  252125 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:29:57.414885  213058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.125971322s)
	W1121 14:29:57.414929  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1121 14:29:57.414939  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:57.414952  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:57.462838  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:57.462881  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:57.526637  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:57.526671  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:57.574224  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:57.574259  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:57.624430  252125 addons.go:530] duration metric: took 564.759261ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:57.898009  252125 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-921956" context rescaled to 1 replicas
	I1121 14:29:56.163632  255774 addons.go:530] duration metric: took 551.031985ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:56.399602  255774 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-376255" context rescaled to 1 replicas
	W1121 14:29:57.899680  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:29:57.174208  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:29:59.672116  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:30:00.114035  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1121 14:29:59.398191  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:01.898360  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:29:59.900344  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:01.900816  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:04.400331  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:01.672252  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:30:04.171805  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:30:05.672011  249617 node_ready.go:49] node "old-k8s-version-012258" is "Ready"
	I1121 14:30:05.672046  249617 node_ready.go:38] duration metric: took 15.003519412s for node "old-k8s-version-012258" to be "Ready" ...
	I1121 14:30:05.672064  249617 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:05.672125  249617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:05.689799  249617 api_server.go:72] duration metric: took 15.377593574s to wait for apiserver process to appear ...
	I1121 14:30:05.689974  249617 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:05.690001  249617 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:30:05.696217  249617 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1121 14:30:05.697950  249617 api_server.go:141] control plane version: v1.28.0
	I1121 14:30:05.697978  249617 api_server.go:131] duration metric: took 7.994891ms to wait for apiserver health ...
	I1121 14:30:05.697990  249617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:05.702726  249617 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:05.702769  249617 system_pods.go:61] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.702778  249617 system_pods.go:61] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.702785  249617 system_pods.go:61] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.702796  249617 system_pods.go:61] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.702808  249617 system_pods.go:61] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.702818  249617 system_pods.go:61] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.702822  249617 system_pods.go:61] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.702829  249617 system_pods.go:61] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.702837  249617 system_pods.go:74] duration metric: took 4.84094ms to wait for pod list to return data ...
	I1121 14:30:05.702852  249617 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:05.705127  249617 default_sa.go:45] found service account: "default"
	I1121 14:30:05.705151  249617 default_sa.go:55] duration metric: took 2.290103ms for default service account to be created ...
	I1121 14:30:05.705161  249617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:05.710235  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:05.710318  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.710330  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.710337  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.710367  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.710374  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.710380  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.710385  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.710404  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.710597  249617 retry.go:31] will retry after 257.065607ms: missing components: kube-dns
	I1121 14:30:05.972608  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:05.972648  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.972657  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.972665  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.972676  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.972682  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.972687  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.972692  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.972707  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.972726  249617 retry.go:31] will retry after 339.692313ms: missing components: kube-dns
	I1121 14:30:06.317124  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:06.317155  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Running
	I1121 14:30:06.317160  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:06.317163  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:06.317167  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:06.317171  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:06.317175  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:06.317178  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:06.317181  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Running
	I1121 14:30:06.317188  249617 system_pods.go:126] duration metric: took 612.020803ms to wait for k8s-apps to be running ...
	I1121 14:30:06.317194  249617 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:06.317250  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:06.332295  249617 system_svc.go:56] duration metric: took 15.088564ms WaitForService to wait for kubelet
	I1121 14:30:06.332331  249617 kubeadm.go:587] duration metric: took 16.020134285s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:06.332357  249617 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:06.338044  249617 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:06.338071  249617 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:06.338084  249617 node_conditions.go:105] duration metric: took 5.72136ms to run NodePressure ...
	I1121 14:30:06.338096  249617 start.go:242] waiting for startup goroutines ...
	I1121 14:30:06.338102  249617 start.go:247] waiting for cluster config update ...
	I1121 14:30:06.338113  249617 start.go:256] writing updated cluster config ...
	I1121 14:30:06.338382  249617 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:06.342534  249617 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:06.347323  249617 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vst4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.352062  249617 pod_ready.go:94] pod "coredns-5dd5756b68-vst4c" is "Ready"
	I1121 14:30:06.352087  249617 pod_ready.go:86] duration metric: took 4.697932ms for pod "coredns-5dd5756b68-vst4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.354946  249617 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.359326  249617 pod_ready.go:94] pod "etcd-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.359355  249617 pod_ready.go:86] duration metric: took 4.388182ms for pod "etcd-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.362007  249617 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.366060  249617 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.366081  249617 pod_ready.go:86] duration metric: took 4.051984ms for pod "kube-apiserver-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.368789  249617 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.746914  249617 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.746952  249617 pod_ready.go:86] duration metric: took 378.141903ms for pod "kube-controller-manager-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.947790  249617 pod_ready.go:83] waiting for pod "kube-proxy-wsp2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.347266  249617 pod_ready.go:94] pod "kube-proxy-wsp2w" is "Ready"
	I1121 14:30:07.347291  249617 pod_ready.go:86] duration metric: took 399.477159ms for pod "kube-proxy-wsp2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.547233  249617 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.946728  249617 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-012258" is "Ready"
	I1121 14:30:07.946756  249617 pod_ready.go:86] duration metric: took 399.500525ms for pod "kube-scheduler-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.946772  249617 pod_ready.go:40] duration metric: took 1.604187461s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:08.009909  249617 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1121 14:30:08.014607  249617 out.go:203] 
	W1121 14:30:08.016075  249617 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1121 14:30:08.020782  249617 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1121 14:30:08.022622  249617 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-012258" cluster and "default" namespace by default
	I1121 14:30:05.115052  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1121 14:30:05.115115  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:05.115188  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:05.143819  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:05.143839  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:30:05.143843  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:05.143846  213058 cri.go:89] found id: ""
	I1121 14:30:05.143853  213058 logs.go:282] 3 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:05.143912  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.148585  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.152984  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.156944  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:05.157004  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:05.185404  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:05.185430  213058 cri.go:89] found id: ""
	I1121 14:30:05.185440  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:05.185498  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.190360  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:05.190432  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:05.222964  213058 cri.go:89] found id: ""
	I1121 14:30:05.222989  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.222999  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:05.223006  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:05.223058  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:05.254414  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:05.254436  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:05.254440  213058 cri.go:89] found id: ""
	I1121 14:30:05.254447  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:05.254505  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.258766  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.262456  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:05.262524  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:05.288454  213058 cri.go:89] found id: ""
	I1121 14:30:05.288486  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.288496  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:05.288505  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:05.288598  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:05.317814  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:05.317841  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:05.317847  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:05.317851  213058 cri.go:89] found id: ""
	I1121 14:30:05.317861  213058 logs.go:282] 3 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:05.317930  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.322506  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.326684  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.330828  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:05.330957  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:05.360073  213058 cri.go:89] found id: ""
	I1121 14:30:05.360098  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.360107  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:05.360116  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:05.360171  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:05.388524  213058 cri.go:89] found id: ""
	I1121 14:30:05.388561  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.388573  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:05.388587  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:05.388602  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:05.427247  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:05.427279  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:05.517583  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:05.517615  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:05.556205  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:30:05.556238  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:30:05.601637  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:05.601692  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:05.642125  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:05.642167  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:05.707252  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:05.707295  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:05.747947  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:05.747990  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:05.767646  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:05.767678  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:04.398534  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:06.897181  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:08.897492  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:06.900285  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	I1121 14:30:07.400113  255774 node_ready.go:49] node "default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:07.400148  255774 node_ready.go:38] duration metric: took 11.503726167s for node "default-k8s-diff-port-376255" to be "Ready" ...
	I1121 14:30:07.400166  255774 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:07.400227  255774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:07.416428  255774 api_server.go:72] duration metric: took 11.804040955s to wait for apiserver process to appear ...
	I1121 14:30:07.416462  255774 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:07.416487  255774 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1121 14:30:07.423355  255774 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1121 14:30:07.424441  255774 api_server.go:141] control plane version: v1.34.1
	I1121 14:30:07.424471  255774 api_server.go:131] duration metric: took 8.001103ms to wait for apiserver health ...
	I1121 14:30:07.424480  255774 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:07.428816  255774 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:07.428856  255774 system_pods.go:61] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.428866  255774 system_pods.go:61] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.428874  255774 system_pods.go:61] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.428880  255774 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.428886  255774 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.428891  255774 system_pods.go:61] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.428899  255774 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.428912  255774 system_pods.go:61] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.428921  255774 system_pods.go:74] duration metric: took 4.433771ms to wait for pod list to return data ...
	I1121 14:30:07.428932  255774 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:07.431771  255774 default_sa.go:45] found service account: "default"
	I1121 14:30:07.431794  255774 default_sa.go:55] duration metric: took 2.856811ms for default service account to be created ...
	I1121 14:30:07.431804  255774 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:07.435787  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.435816  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.435821  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.435826  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.435830  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.435833  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.435836  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.435841  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.435846  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.435871  255774 retry.go:31] will retry after 217.060579ms: missing components: kube-dns
	I1121 14:30:07.656900  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.656930  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.656937  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.656945  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.656950  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.656955  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.656959  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.656964  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.656970  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.656989  255774 retry.go:31] will retry after 330.648304ms: missing components: kube-dns
	I1121 14:30:07.995514  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.995612  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.995626  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.995636  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.995642  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.995653  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.995659  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.995664  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.995683  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.995713  255774 retry.go:31] will retry after 466.383408ms: missing components: kube-dns
	I1121 14:30:08.466385  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:08.466414  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Running
	I1121 14:30:08.466419  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:08.466423  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:08.466427  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:08.466430  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:08.466435  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:08.466438  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:08.466441  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Running
	I1121 14:30:08.466448  255774 system_pods.go:126] duration metric: took 1.034639333s to wait for k8s-apps to be running ...
	I1121 14:30:08.466454  255774 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:08.466495  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:08.480058  255774 system_svc.go:56] duration metric: took 13.59071ms WaitForService to wait for kubelet
	I1121 14:30:08.480087  255774 kubeadm.go:587] duration metric: took 12.867708638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:08.480104  255774 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:08.483054  255774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:08.483077  255774 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:08.483089  255774 node_conditions.go:105] duration metric: took 2.980591ms to run NodePressure ...
	I1121 14:30:08.483101  255774 start.go:242] waiting for startup goroutines ...
	I1121 14:30:08.483107  255774 start.go:247] waiting for cluster config update ...
	I1121 14:30:08.483116  255774 start.go:256] writing updated cluster config ...
	I1121 14:30:08.483378  255774 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:08.487457  255774 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:08.490869  255774 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.495613  255774 pod_ready.go:94] pod "coredns-66bc5c9577-fr27b" is "Ready"
	I1121 14:30:08.495638  255774 pod_ready.go:86] duration metric: took 4.745112ms for pod "coredns-66bc5c9577-fr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.498070  255774 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.502098  255774 pod_ready.go:94] pod "etcd-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.502122  255774 pod_ready.go:86] duration metric: took 4.029361ms for pod "etcd-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.504276  255774 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.508229  255774 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.508250  255774 pod_ready.go:86] duration metric: took 3.957821ms for pod "kube-apiserver-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.510387  255774 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.891344  255774 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.891369  255774 pod_ready.go:86] duration metric: took 380.959206ms for pod "kube-controller-manager-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.091636  255774 pod_ready.go:83] waiting for pod "kube-proxy-hdplf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.492078  255774 pod_ready.go:94] pod "kube-proxy-hdplf" is "Ready"
	I1121 14:30:09.492108  255774 pod_ready.go:86] duration metric: took 400.444722ms for pod "kube-proxy-hdplf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.693278  255774 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:10.092105  255774 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:10.092133  255774 pod_ready.go:86] duration metric: took 398.824976ms for pod "kube-scheduler-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:10.092146  255774 pod_ready.go:40] duration metric: took 1.604655578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:10.138628  255774 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:30:10.140593  255774 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-376255" cluster and "default" namespace by default
	I1121 14:30:08.754284  213058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (2.986586875s)
	W1121 14:30:08.754342  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60538->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60538->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1121 14:30:08.754352  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:08.754366  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:08.789119  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:08.789149  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:08.842933  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:08.842974  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:08.880878  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:08.880919  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:08.910920  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:30:08.910953  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.440020  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:11.440496  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:11.440556  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:11.440601  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:11.472645  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:11.472669  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:11.472674  213058 cri.go:89] found id: ""
	I1121 14:30:11.472683  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:11.472748  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.478061  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.482946  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:11.483034  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:11.517693  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:11.517722  213058 cri.go:89] found id: ""
	I1121 14:30:11.517732  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:11.517797  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.523621  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:11.523699  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:11.559155  213058 cri.go:89] found id: ""
	I1121 14:30:11.559194  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.559204  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:11.559212  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:11.559271  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:11.595093  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:11.595127  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:11.595133  213058 cri.go:89] found id: ""
	I1121 14:30:11.595143  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:11.595194  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.600085  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.604973  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:11.605048  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:11.639606  213058 cri.go:89] found id: ""
	I1121 14:30:11.639636  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.639647  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:11.639653  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:11.639713  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:11.684373  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:11.684400  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.684405  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:11.684410  213058 cri.go:89] found id: ""
	I1121 14:30:11.684421  213058 logs.go:282] 3 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:11.684482  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.689732  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.695253  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.701315  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:11.701388  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:11.732802  213058 cri.go:89] found id: ""
	I1121 14:30:11.732831  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.732841  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:11.732848  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:11.732907  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:11.761686  213058 cri.go:89] found id: ""
	I1121 14:30:11.761717  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.761729  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:11.761741  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:11.761756  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:11.816634  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:11.816670  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:11.846024  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:30:11.846055  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.876932  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:11.876964  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:11.912984  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:11.913018  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:11.965381  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:11.965423  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:11.997477  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:11.997509  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:12.011497  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:12.011524  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:12.071024  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:12.071049  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:12.071065  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:12.106865  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:12.106898  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:12.141245  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:12.141276  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:12.176551  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:12.176600  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:12.268742  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:12.268780  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	W1121 14:30:10.897620  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	I1121 14:30:11.398100  252125 node_ready.go:49] node "no-preload-921956" is "Ready"
	I1121 14:30:11.398128  252125 node_ready.go:38] duration metric: took 14.003530083s for node "no-preload-921956" to be "Ready" ...
	I1121 14:30:11.398142  252125 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:11.398195  252125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:11.412043  252125 api_server.go:72] duration metric: took 14.35241025s to wait for apiserver process to appear ...
	I1121 14:30:11.412070  252125 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:11.412087  252125 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1121 14:30:11.417254  252125 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1121 14:30:11.418517  252125 api_server.go:141] control plane version: v1.34.1
	I1121 14:30:11.418570  252125 api_server.go:131] duration metric: took 6.492303ms to wait for apiserver health ...
	I1121 14:30:11.418581  252125 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:11.421927  252125 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:11.422024  252125 system_pods.go:61] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.422034  252125 system_pods.go:61] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.422047  252125 system_pods.go:61] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.422059  252125 system_pods.go:61] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.422069  252125 system_pods.go:61] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.422073  252125 system_pods.go:61] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.422077  252125 system_pods.go:61] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.422082  252125 system_pods.go:61] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.422094  252125 system_pods.go:74] duration metric: took 3.505153ms to wait for pod list to return data ...
	I1121 14:30:11.422109  252125 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:11.424685  252125 default_sa.go:45] found service account: "default"
	I1121 14:30:11.424710  252125 default_sa.go:55] duration metric: took 2.591611ms for default service account to be created ...
	I1121 14:30:11.424722  252125 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:11.427627  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.427680  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.427689  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.427703  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.427713  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.427721  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.427726  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.427731  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.427737  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.427768  252125 retry.go:31] will retry after 234.428318ms: missing components: kube-dns
	I1121 14:30:11.669788  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.669831  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.669840  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.669850  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.669858  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.669865  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.669871  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.669877  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.669893  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.669919  252125 retry.go:31] will retry after 250.085803ms: missing components: kube-dns
	I1121 14:30:11.924517  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.924602  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.924614  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.924627  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.924633  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.924642  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.924647  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.924653  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.924661  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.924682  252125 retry.go:31] will retry after 441.862758ms: missing components: kube-dns
	I1121 14:30:12.371065  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:12.371110  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:12.371122  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:12.371131  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:12.371136  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:12.371142  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:12.371147  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:12.371158  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:12.371170  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:12.371189  252125 retry.go:31] will retry after 502.578888ms: missing components: kube-dns
	I1121 14:30:12.879209  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:12.879243  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Running
	I1121 14:30:12.879249  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:12.879253  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:12.879258  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:12.879268  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:12.879271  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:12.879275  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:12.879278  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Running
	I1121 14:30:12.879289  252125 system_pods.go:126] duration metric: took 1.454561179s to wait for k8s-apps to be running ...
	I1121 14:30:12.879301  252125 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:12.879351  252125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:12.894061  252125 system_svc.go:56] duration metric: took 14.74714ms WaitForService to wait for kubelet
	I1121 14:30:12.894092  252125 kubeadm.go:587] duration metric: took 15.834465857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:12.894115  252125 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:12.897599  252125 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:12.897630  252125 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:12.897641  252125 node_conditions.go:105] duration metric: took 3.520753ms to run NodePressure ...
	I1121 14:30:12.897652  252125 start.go:242] waiting for startup goroutines ...
	I1121 14:30:12.897659  252125 start.go:247] waiting for cluster config update ...
	I1121 14:30:12.897669  252125 start.go:256] writing updated cluster config ...
	I1121 14:30:12.897983  252125 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:12.902897  252125 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:12.906562  252125 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s4rzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.912263  252125 pod_ready.go:94] pod "coredns-66bc5c9577-s4rzb" is "Ready"
	I1121 14:30:12.912286  252125 pod_ready.go:86] duration metric: took 5.702456ms for pod "coredns-66bc5c9577-s4rzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.915190  252125 pod_ready.go:83] waiting for pod "etcd-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.919870  252125 pod_ready.go:94] pod "etcd-no-preload-921956" is "Ready"
	I1121 14:30:12.919896  252125 pod_ready.go:86] duration metric: took 4.68423ms for pod "etcd-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.921926  252125 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.925984  252125 pod_ready.go:94] pod "kube-apiserver-no-preload-921956" is "Ready"
	I1121 14:30:12.926012  252125 pod_ready.go:86] duration metric: took 4.065762ms for pod "kube-apiserver-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.928283  252125 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.307608  252125 pod_ready.go:94] pod "kube-controller-manager-no-preload-921956" is "Ready"
	I1121 14:30:13.307639  252125 pod_ready.go:86] duration metric: took 379.335151ms for pod "kube-controller-manager-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.508229  252125 pod_ready.go:83] waiting for pod "kube-proxy-wmx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.907070  252125 pod_ready.go:94] pod "kube-proxy-wmx7z" is "Ready"
	I1121 14:30:13.907101  252125 pod_ready.go:86] duration metric: took 398.843128ms for pod "kube-proxy-wmx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.108040  252125 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.507264  252125 pod_ready.go:94] pod "kube-scheduler-no-preload-921956" is "Ready"
	I1121 14:30:14.507293  252125 pod_ready.go:86] duration metric: took 399.219492ms for pod "kube-scheduler-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.507307  252125 pod_ready.go:40] duration metric: took 1.604362709s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:14.554506  252125 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:30:14.556366  252125 out.go:179] * Done! kubectl is now configured to use "no-preload-921956" cluster and "default" namespace by default
	I1121 14:30:14.802507  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:14.803048  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:14.803100  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:14.803156  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:14.832438  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:14.832464  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:14.832469  213058 cri.go:89] found id: ""
	I1121 14:30:14.832479  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:14.832560  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.836869  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.840970  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:14.841027  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:14.869276  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:14.869297  213058 cri.go:89] found id: ""
	I1121 14:30:14.869306  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:14.869364  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.873530  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:14.873616  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:14.902293  213058 cri.go:89] found id: ""
	I1121 14:30:14.902325  213058 logs.go:282] 0 containers: []
	W1121 14:30:14.902336  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:14.902343  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:14.902396  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:14.931422  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:14.931444  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:14.931448  213058 cri.go:89] found id: ""
	I1121 14:30:14.931455  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:14.931507  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.936188  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.940673  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:14.940742  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:14.969277  213058 cri.go:89] found id: ""
	I1121 14:30:14.969308  213058 logs.go:282] 0 containers: []
	W1121 14:30:14.969320  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:14.969328  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:14.969386  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:14.999162  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:14.999190  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:14.999195  213058 cri.go:89] found id: ""
	I1121 14:30:14.999209  213058 logs.go:282] 2 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:14.999275  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:15.003627  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:15.008044  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:15.008149  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:15.036025  213058 cri.go:89] found id: ""
	I1121 14:30:15.036050  213058 logs.go:282] 0 containers: []
	W1121 14:30:15.036061  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:15.036069  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:15.036123  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:15.064814  213058 cri.go:89] found id: ""
	I1121 14:30:15.064840  213058 logs.go:282] 0 containers: []
	W1121 14:30:15.064851  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:15.064863  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:15.064877  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:15.105369  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:15.105412  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:15.145479  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:15.145521  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:15.186460  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:15.186498  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:15.233156  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:15.233196  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:15.328776  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:15.328824  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:15.343510  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:15.343556  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:15.375919  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:15.375959  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:15.412267  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:15.412310  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:15.467388  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:15.467422  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:15.495400  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:15.495451  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:15.527880  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:15.527906  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:15.589380  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:18.090626  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:18.091055  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:18.091106  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:18.091154  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:18.119750  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:18.119777  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:18.119781  213058 cri.go:89] found id: ""
	I1121 14:30:18.119788  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:18.119846  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.124441  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.128481  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:18.128574  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:18.155968  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:18.155990  213058 cri.go:89] found id: ""
	I1121 14:30:18.156000  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:18.156056  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.160457  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:18.160529  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:18.191869  213058 cri.go:89] found id: ""
	I1121 14:30:18.191899  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.191909  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:18.191916  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:18.191990  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:18.222614  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:18.222639  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:18.222644  213058 cri.go:89] found id: ""
	I1121 14:30:18.222653  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:18.222710  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.227248  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.231976  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:18.232054  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:18.261651  213058 cri.go:89] found id: ""
	I1121 14:30:18.261686  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.261696  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:18.261703  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:18.261756  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:18.293248  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:18.293277  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:18.293283  213058 cri.go:89] found id: ""
	I1121 14:30:18.293291  213058 logs.go:282] 2 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:18.293360  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.297988  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.302375  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:18.302444  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:18.331900  213058 cri.go:89] found id: ""
	I1121 14:30:18.331976  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.331989  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:18.331997  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:18.332053  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:18.362314  213058 cri.go:89] found id: ""
	I1121 14:30:18.362341  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.362351  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:18.362363  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:18.362378  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:18.401362  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:18.401403  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:18.453554  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:18.453597  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:18.470719  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:18.470750  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:18.535220  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:18.535241  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:18.535255  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:18.572460  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:18.572490  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	db852415ef1dc       56cc512116c8f       7 seconds ago       Running             busybox                   0                   e54fe86273872       busybox                                                default
	503bfdf03cf92       52546a367cc9e       13 seconds ago      Running             coredns                   0                   90307d29a5634       coredns-66bc5c9577-fr27b                               kube-system
	72566b31204f1       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   c822919b946f5       storage-provisioner                                    kube-system
	5ae0b8683c837       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   c07d35ce51347       kindnet-cdzd4                                          kube-system
	482b9bb196494       fc25172553d79       24 seconds ago      Running             kube-proxy                0                   793cf2292079a       kube-proxy-hdplf                                       kube-system
	d4b4acbfed098       c80c8dbafe7dd       35 seconds ago      Running             kube-controller-manager   0                   5996271748c58       kube-controller-manager-default-k8s-diff-port-376255   kube-system
	0167abb93fad5       5f1f5298c888d       35 seconds ago      Running             etcd                      0                   5677c92bba15d       etcd-default-k8s-diff-port-376255                      kube-system
	049e7f927287c       7dd6aaa1717ab       35 seconds ago      Running             kube-scheduler            0                   e6e8ff5f9a760       kube-scheduler-default-k8s-diff-port-376255            kube-system
	d3f63cf7e2378       c3994bc696102       35 seconds ago      Running             kube-apiserver            0                   8dfe7b46f28da       kube-apiserver-default-k8s-diff-port-376255            kube-system
	
	
	==> containerd <==
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.417258305Z" level=info msg="CreateContainer within sandbox \"c822919b946f5084228dedf9bcff448780d4c1d0f9bb88544bec381ec181e4b4\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"72566b31204f17c69820e87dc138f52467a4fe88b660933bd2d6fbab49f14b83\""
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.418098732Z" level=info msg="StartContainer for \"72566b31204f17c69820e87dc138f52467a4fe88b660933bd2d6fbab49f14b83\""
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.419052809Z" level=info msg="connecting to shim 72566b31204f17c69820e87dc138f52467a4fe88b660933bd2d6fbab49f14b83" address="unix:///run/containerd/s/ea06ee1969c69f41a158dafd695d145ae6a2522a693ddcad561ea53000bcae67" protocol=ttrpc version=3
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.422837003Z" level=info msg="Container 503bfdf03cf92076c47a1396f31d08fee2bfe4b847e852055e31dd2cb1208786: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.432317405Z" level=info msg="CreateContainer within sandbox \"90307d29a563415a13a6efc9e6611bdfa8459eab6a4193ce269e2c075d2e77c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"503bfdf03cf92076c47a1396f31d08fee2bfe4b847e852055e31dd2cb1208786\""
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.433885617Z" level=info msg="StartContainer for \"503bfdf03cf92076c47a1396f31d08fee2bfe4b847e852055e31dd2cb1208786\""
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.435002526Z" level=info msg="connecting to shim 503bfdf03cf92076c47a1396f31d08fee2bfe4b847e852055e31dd2cb1208786" address="unix:///run/containerd/s/e4364fe920f744c3ba1c981b59ff648e4b672f006c8b2ce6a982c700c058a032" protocol=ttrpc version=3
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.475458464Z" level=info msg="StartContainer for \"72566b31204f17c69820e87dc138f52467a4fe88b660933bd2d6fbab49f14b83\" returns successfully"
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.483417325Z" level=info msg="StartContainer for \"503bfdf03cf92076c47a1396f31d08fee2bfe4b847e852055e31dd2cb1208786\" returns successfully"
	Nov 21 14:30:10 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:10.625630528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e6d82a47-2d60-4b9a-8e47-37d867b92b64,Namespace:default,Attempt:0,}"
	Nov 21 14:30:10 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:10.665247296Z" level=info msg="connecting to shim e54fe862738726b4a20f4534960ca579dd1eebd8f039b9e8eb7a64ec18185c30" address="unix:///run/containerd/s/7683fa59a762f604c3ba440e18606922538b08060da5f003cc83fc10a8b41128" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:30:10 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:10.735678986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e6d82a47-2d60-4b9a-8e47-37d867b92b64,Namespace:default,Attempt:0,} returns sandbox id \"e54fe862738726b4a20f4534960ca579dd1eebd8f039b9e8eb7a64ec18185c30\""
	Nov 21 14:30:10 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:10.737883752Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.002632475Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.003520464Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396645"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.004959871Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.007088408Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.007589904Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.269662828s"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.007636702Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.012526795Z" level=info msg="CreateContainer within sandbox \"e54fe862738726b4a20f4534960ca579dd1eebd8f039b9e8eb7a64ec18185c30\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.020535489Z" level=info msg="Container db852415ef1dcbf853ef93f70d23ccd5ec94be8704c247fe952702868e9a6a75: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.029170044Z" level=info msg="CreateContainer within sandbox \"e54fe862738726b4a20f4534960ca579dd1eebd8f039b9e8eb7a64ec18185c30\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"db852415ef1dcbf853ef93f70d23ccd5ec94be8704c247fe952702868e9a6a75\""
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.029914495Z" level=info msg="StartContainer for \"db852415ef1dcbf853ef93f70d23ccd5ec94be8704c247fe952702868e9a6a75\""
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.030935590Z" level=info msg="connecting to shim db852415ef1dcbf853ef93f70d23ccd5ec94be8704c247fe952702868e9a6a75" address="unix:///run/containerd/s/7683fa59a762f604c3ba440e18606922538b08060da5f003cc83fc10a8b41128" protocol=ttrpc version=3
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.089294399Z" level=info msg="StartContainer for \"db852415ef1dcbf853ef93f70d23ccd5ec94be8704c247fe952702868e9a6a75\" returns successfully"
	
	
	==> coredns [503bfdf03cf92076c47a1396f31d08fee2bfe4b847e852055e31dd2cb1208786] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34200 - 39323 "HINFO IN 5503388865233133299.8183971682332353198. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.096214955s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-376255
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-376255
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=default-k8s-diff-port-376255
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_29_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:29:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-376255
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:30:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:30:06 +0000   Fri, 21 Nov 2025 14:29:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:30:06 +0000   Fri, 21 Nov 2025 14:29:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:30:06 +0000   Fri, 21 Nov 2025 14:29:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:30:06 +0000   Fri, 21 Nov 2025 14:30:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-376255
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                36196da5-e221-443f-ae48-9567a40a96a8
	  Boot ID:                    f900700b-0668-4d24-87ff-85e15fbda365
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-fr27b                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-376255                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-cdzd4                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-376255             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-376255    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-hdplf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-376255             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-376255 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-376255 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x7 over 36s)  kubelet          Node default-k8s-diff-port-376255 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  36s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node default-k8s-diff-port-376255 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node default-k8s-diff-port-376255 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node default-k8s-diff-port-376255 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node default-k8s-diff-port-376255 event: Registered Node default-k8s-diff-port-376255 in Controller
	  Normal  NodeReady                14s                kubelet          Node default-k8s-diff-port-376255 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001887] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086016] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.440508] i8042: Warning: Keylock active
	[  +0.011202] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.526419] block sda: the capability attribute has been deprecated.
	[  +0.095215] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027093] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.485024] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [0167abb93fad5a96138057402ea72b2bbbac6460847560456f81c3e61a226b4f] <==
	{"level":"warn","ts":"2025-11-21T14:29:46.391480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.401325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.445535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.457263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.467962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.479670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.491145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.500690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.511596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.529759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.541910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.553246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.567893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.576669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.586761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.597014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.607480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.619647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.628355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.648906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.658906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.678319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.689570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.702820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.796116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55500","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:30:20 up  1:12,  0 user,  load average: 4.09, 3.08, 1.94
	Linux default-k8s-diff-port-376255 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5ae0b8683c8370d5c74a38ec1a8996128b935a4e574cd9f20d9213a154813db9] <==
	I1121 14:29:56.575119       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:29:56.575390       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:29:56.575585       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:29:56.575602       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:29:56.575621       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:29:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:29:56.873269       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:29:56.873300       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:29:56.873314       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:29:56.873899       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:29:57.174789       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:29:57.174824       1 metrics.go:72] Registering metrics
	I1121 14:29:57.174874       1 controller.go:711] "Syncing nftables rules"
	I1121 14:30:06.876368       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:30:06.876471       1 main.go:301] handling current node
	I1121 14:30:16.874272       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:30:16.874308       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d3f63cf7e2378b1cd63984e31c6b646308b750ea8cc070ff57b3cee65a92c4db] <==
	I1121 14:29:47.489457       1 aggregator.go:171] initial CRD sync complete...
	I1121 14:29:47.489469       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:29:47.489484       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:29:47.489491       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:29:47.494595       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:29:47.503871       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:47.524079       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:29:48.386698       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:29:48.390984       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:29:48.391006       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:29:49.045280       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:29:49.087410       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:29:49.187749       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:29:49.193901       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:29:49.195067       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:29:49.199941       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:29:49.402735       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:29:50.042816       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:29:50.055664       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:29:50.067332       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:29:54.607808       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:54.612744       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:55.106555       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:29:55.455001       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1121 14:30:19.423999       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:53806: use of closed network connection
	
	
	==> kube-controller-manager [d4b4acbfed0989aceacf5589cec62c91cea975b67f5a3ae6feb60ef411e8095e] <==
	I1121 14:29:54.400580       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:29:54.401694       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:29:54.401750       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:29:54.402151       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:29:54.402224       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:29:54.402283       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:29:54.402677       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:29:54.402691       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:29:54.403205       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:29:54.402823       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:29:54.403570       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 14:29:54.404715       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:29:54.408012       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:29:54.410310       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:29:54.410378       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:29:54.420705       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 14:29:54.420790       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:29:54.420955       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:29:54.420969       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:29:54.420987       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:29:54.428224       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:29:54.430662       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:29:54.431597       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-376255" podCIDRs=["10.244.0.0/24"]
	I1121 14:29:54.434636       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:30:09.349487       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [482b9bb19649402137bebb046dcd7e73f5411dcc7697d3a5b2a9fffd9e7ccf16] <==
	I1121 14:29:56.087038       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:29:56.172292       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:29:56.272394       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:29:56.272432       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:29:56.272615       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:29:56.297614       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:29:56.297678       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:29:56.303209       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:29:56.303624       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:29:56.303656       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:29:56.304850       1 config.go:200] "Starting service config controller"
	I1121 14:29:56.304884       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:29:56.304887       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:29:56.304923       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:29:56.304926       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:29:56.304945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:29:56.304970       1 config.go:309] "Starting node config controller"
	I1121 14:29:56.304976       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:29:56.405272       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:29:56.405315       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:29:56.405323       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:29:56.405341       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [049e7f927287c0eda41eb968ee81714a27b377f233379aa501e22da2bc6fb72e] <==
	E1121 14:29:47.505907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:29:47.505987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:29:47.506074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:29:47.506135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:29:47.506342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:29:47.506415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:29:47.506461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:29:47.506574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:29:47.507660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:29:47.508397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:29:47.510084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:29:48.328160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:29:48.332628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:29:48.426401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:29:48.428709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:29:48.532292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:29:48.552036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:29:48.554522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:29:48.702784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:29:48.717485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:29:48.770253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 14:29:48.790559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:29:48.798155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:29:48.802861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1121 14:29:50.698431       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:29:50 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:50.945498    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-376255" podStartSLOduration=1.945482685 podStartE2EDuration="1.945482685s" podCreationTimestamp="2025-11-21 14:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:50.945013076 +0000 UTC m=+1.135610121" watchObservedRunningTime="2025-11-21 14:29:50.945482685 +0000 UTC m=+1.136079730"
	Nov 21 14:29:50 default-k8s-diff-port-376255 kubelet[1434]: E1121 14:29:50.953777    1434 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-376255\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-376255"
	Nov 21 14:29:50 default-k8s-diff-port-376255 kubelet[1434]: E1121 14:29:50.954123    1434 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-376255\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-376255"
	Nov 21 14:29:50 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:50.964987    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-376255" podStartSLOduration=0.96496409 podStartE2EDuration="964.96409ms" podCreationTimestamp="2025-11-21 14:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:50.963604357 +0000 UTC m=+1.154201384" watchObservedRunningTime="2025-11-21 14:29:50.96496409 +0000 UTC m=+1.155561135"
	Nov 21 14:29:54 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:54.478889    1434 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:29:54 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:54.479768    1434 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542417    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz9xw\" (UniqueName: \"kubernetes.io/projected/f4b8f54c-361f-4748-9f31-92ffb753f404-kube-api-access-fz9xw\") pod \"kube-proxy-hdplf\" (UID: \"f4b8f54c-361f-4748-9f31-92ffb753f404\") " pod="kube-system/kube-proxy-hdplf"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542480    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f954f962-f79a-49e5-8b79-5fbd3c544ffc-cni-cfg\") pod \"kindnet-cdzd4\" (UID: \"f954f962-f79a-49e5-8b79-5fbd3c544ffc\") " pod="kube-system/kindnet-cdzd4"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542509    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f954f962-f79a-49e5-8b79-5fbd3c544ffc-lib-modules\") pod \"kindnet-cdzd4\" (UID: \"f954f962-f79a-49e5-8b79-5fbd3c544ffc\") " pod="kube-system/kindnet-cdzd4"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542534    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qlx6\" (UniqueName: \"kubernetes.io/projected/f954f962-f79a-49e5-8b79-5fbd3c544ffc-kube-api-access-5qlx6\") pod \"kindnet-cdzd4\" (UID: \"f954f962-f79a-49e5-8b79-5fbd3c544ffc\") " pod="kube-system/kindnet-cdzd4"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542593    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b8f54c-361f-4748-9f31-92ffb753f404-xtables-lock\") pod \"kube-proxy-hdplf\" (UID: \"f4b8f54c-361f-4748-9f31-92ffb753f404\") " pod="kube-system/kube-proxy-hdplf"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542609    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f954f962-f79a-49e5-8b79-5fbd3c544ffc-xtables-lock\") pod \"kindnet-cdzd4\" (UID: \"f954f962-f79a-49e5-8b79-5fbd3c544ffc\") " pod="kube-system/kindnet-cdzd4"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542628    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4b8f54c-361f-4748-9f31-92ffb753f404-kube-proxy\") pod \"kube-proxy-hdplf\" (UID: \"f4b8f54c-361f-4748-9f31-92ffb753f404\") " pod="kube-system/kube-proxy-hdplf"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542652    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b8f54c-361f-4748-9f31-92ffb753f404-lib-modules\") pod \"kube-proxy-hdplf\" (UID: \"f4b8f54c-361f-4748-9f31-92ffb753f404\") " pod="kube-system/kube-proxy-hdplf"
	Nov 21 14:29:56 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:56.980508    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hdplf" podStartSLOduration=1.980488013 podStartE2EDuration="1.980488013s" podCreationTimestamp="2025-11-21 14:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:56.980288351 +0000 UTC m=+7.170885396" watchObservedRunningTime="2025-11-21 14:29:56.980488013 +0000 UTC m=+7.171085057"
	Nov 21 14:29:56 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:56.980681    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cdzd4" podStartSLOduration=1.980672067 podStartE2EDuration="1.980672067s" podCreationTimestamp="2025-11-21 14:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:56.968815157 +0000 UTC m=+7.159412203" watchObservedRunningTime="2025-11-21 14:29:56.980672067 +0000 UTC m=+7.171269111"
	Nov 21 14:30:06 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:06.960724    1434 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:30:07 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:07.025858    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aecd7b98-657f-464e-9860-d060714bbc5d-config-volume\") pod \"coredns-66bc5c9577-fr27b\" (UID: \"aecd7b98-657f-464e-9860-d060714bbc5d\") " pod="kube-system/coredns-66bc5c9577-fr27b"
	Nov 21 14:30:07 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:07.025901    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnxkj\" (UniqueName: \"kubernetes.io/projected/4fa1d228-0310-45d2-87b6-91ce085f1f58-kube-api-access-hnxkj\") pod \"storage-provisioner\" (UID: \"4fa1d228-0310-45d2-87b6-91ce085f1f58\") " pod="kube-system/storage-provisioner"
	Nov 21 14:30:07 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:07.025941    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wlfl\" (UniqueName: \"kubernetes.io/projected/aecd7b98-657f-464e-9860-d060714bbc5d-kube-api-access-2wlfl\") pod \"coredns-66bc5c9577-fr27b\" (UID: \"aecd7b98-657f-464e-9860-d060714bbc5d\") " pod="kube-system/coredns-66bc5c9577-fr27b"
	Nov 21 14:30:07 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:07.025973    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4fa1d228-0310-45d2-87b6-91ce085f1f58-tmp\") pod \"storage-provisioner\" (UID: \"4fa1d228-0310-45d2-87b6-91ce085f1f58\") " pod="kube-system/storage-provisioner"
	Nov 21 14:30:08 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:08.024337    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.024313781 podStartE2EDuration="12.024313781s" podCreationTimestamp="2025-11-21 14:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:30:08.024165656 +0000 UTC m=+18.214762699" watchObservedRunningTime="2025-11-21 14:30:08.024313781 +0000 UTC m=+18.214910826"
	Nov 21 14:30:08 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:08.024505    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fr27b" podStartSLOduration=13.02449524 podStartE2EDuration="13.02449524s" podCreationTimestamp="2025-11-21 14:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:30:08.01196088 +0000 UTC m=+18.202557939" watchObservedRunningTime="2025-11-21 14:30:08.02449524 +0000 UTC m=+18.215092285"
	Nov 21 14:30:10 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:10.350273    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt5c7\" (UniqueName: \"kubernetes.io/projected/e6d82a47-2d60-4b9a-8e47-37d867b92b64-kube-api-access-zt5c7\") pod \"busybox\" (UID: \"e6d82a47-2d60-4b9a-8e47-37d867b92b64\") " pod="default/busybox"
	Nov 21 14:30:14 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:14.014319    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.74306566 podStartE2EDuration="4.014298186s" podCreationTimestamp="2025-11-21 14:30:10 +0000 UTC" firstStartedPulling="2025-11-21 14:30:10.737415699 +0000 UTC m=+20.928012736" lastFinishedPulling="2025-11-21 14:30:13.008648225 +0000 UTC m=+23.199245262" observedRunningTime="2025-11-21 14:30:14.014088039 +0000 UTC m=+24.204685088" watchObservedRunningTime="2025-11-21 14:30:14.014298186 +0000 UTC m=+24.204895230"
	
	
	==> storage-provisioner [72566b31204f17c69820e87dc138f52467a4fe88b660933bd2d6fbab49f14b83] <==
	I1121 14:30:07.485611       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:30:07.494496       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:30:07.494563       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:30:07.496836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:07.502215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:30:07.502370       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:30:07.502572       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-376255_01e4b301-4ab2-4e88-90be-8213872d2096!
	I1121 14:30:07.503060       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c8a28cf-d14c-42de-b72a-faa3b4f36feb", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-376255_01e4b301-4ab2-4e88-90be-8213872d2096 became leader
	W1121 14:30:07.510786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:07.514054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:30:07.603340       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-376255_01e4b301-4ab2-4e88-90be-8213872d2096!
	W1121 14:30:09.517386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:09.523088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:11.528136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:11.533792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:13.537345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:13.541698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:15.545533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:15.550374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:17.554459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:17.560214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:19.563656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:19.568461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-376255 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-376255
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-376255:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6",
	        "Created": "2025-11-21T14:29:32.009081088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 257784,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:29:32.068439596Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6/hostname",
	        "HostsPath": "/var/lib/docker/containers/61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6/hosts",
	        "LogPath": "/var/lib/docker/containers/61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6/61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6-json.log",
	        "Name": "/default-k8s-diff-port-376255",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-376255:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-376255",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "61c87ca973c0a4e277f25b12adbf76161cef17709fcfc19c44e8b5cb016b7cc6",
	                "LowerDir": "/var/lib/docker/overlay2/d47e2ba9d0651c4ea883e5bf100c225e4b05e3e5505fc143f634d6ecb551fb9e-init/diff:/var/lib/docker/overlay2/a649757dd9587fa5a20ca8a56ec1923099f2a5e912dc7e8e1dfa08e79248b59f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d47e2ba9d0651c4ea883e5bf100c225e4b05e3e5505fc143f634d6ecb551fb9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d47e2ba9d0651c4ea883e5bf100c225e4b05e3e5505fc143f634d6ecb551fb9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d47e2ba9d0651c4ea883e5bf100c225e4b05e3e5505fc143f634d6ecb551fb9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-376255",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-376255/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-376255",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-376255",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-376255",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0a24621d720643b3fcc29e1e4e073681c8649e0d7d5f8233994b273a41233ead",
	            "SandboxKey": "/var/run/docker/netns/0a24621d7206",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-376255": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "25d9d9bd67c8277f14a165b0389b03608121b262dc0482f5f0c6cce668c1cfe5",
	                    "EndpointID": "99e8c973752335e26b21d966b72adfcdadf31879bb82aa32ab6520519ebe814c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "4e:7c:cf:18:0f:23",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-376255",
	                        "61c87ca973c0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-376255 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-376255 logs -n 25: (1.367444195s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p cilium-459127 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo containerd config dump                                                                                                                                                                                                        │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ delete  │ -p cert-expiration-371956                                                                                                                                                                                                                           │ cert-expiration-371956       │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ -p cilium-459127 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo crio config                                                                                                                                                                                                                   │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ delete  │ -p cilium-459127                                                                                                                                                                                                                                    │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ start   │ -p cert-options-733993 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p force-systemd-flag-730471 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p NoKubernetes-187733 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ stop    │ -p NoKubernetes-187733                                                                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p NoKubernetes-187733 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p NoKubernetes-187733 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │                     │
	│ delete  │ -p NoKubernetes-187733                                                                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p old-k8s-version-012258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ ssh     │ cert-options-733993 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p cert-options-733993 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ delete  │ -p cert-options-733993                                                                                                                                                                                                                              │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p no-preload-921956 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ ssh     │ force-systemd-flag-730471 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ delete  │ -p force-systemd-flag-730471                                                                                                                                                                                                                        │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p default-k8s-diff-port-376255 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-376255 │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-012258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:30 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:29:24
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:29:24.877938  255774 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:29:24.878133  255774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:29:24.878179  255774 out.go:374] Setting ErrFile to fd 2...
	I1121 14:29:24.878200  255774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:29:24.879901  255774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:29:24.881344  255774 out.go:368] Setting JSON to false
	I1121 14:29:24.883254  255774 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4307,"bootTime":1763731058,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:29:24.883372  255774 start.go:143] virtualization: kvm guest
	I1121 14:29:24.885483  255774 out.go:179] * [default-k8s-diff-port-376255] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:29:24.887201  255774 notify.go:221] Checking for updates...
	I1121 14:29:24.887242  255774 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:29:24.890729  255774 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:29:24.892963  255774 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:24.894677  255774 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 14:29:24.897870  255774 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:29:24.899765  255774 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:29:24.902854  255774 config.go:182] Loaded profile config "kubernetes-upgrade-797080": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:24.903030  255774 config.go:182] Loaded profile config "no-preload-921956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:24.903162  255774 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:24.903312  255774 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:29:24.939143  255774 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:29:24.939248  255774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:29:25.025144  255774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-21 14:29:25.01035373 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:29:25.025295  255774 docker.go:319] overlay module found
	I1121 14:29:25.027378  255774 out.go:179] * Using the docker driver based on user configuration
	I1121 14:29:22.611340  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-012258
	
	I1121 14:29:22.611365  249617 ubuntu.go:182] provisioning hostname "old-k8s-version-012258"
	I1121 14:29:22.611426  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:22.635589  249617 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:22.635869  249617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1121 14:29:22.635891  249617 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-012258 && echo "old-k8s-version-012258" | sudo tee /etc/hostname
	I1121 14:29:22.796661  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-012258
	
	I1121 14:29:22.796754  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:22.822578  249617 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:22.822834  249617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1121 14:29:22.822860  249617 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-012258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-012258/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-012258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:29:22.970644  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:29:22.970676  249617 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:29:22.970732  249617 ubuntu.go:190] setting up certificates
	I1121 14:29:22.970743  249617 provision.go:84] configureAuth start
	I1121 14:29:22.970826  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:22.991118  249617 provision.go:143] copyHostCerts
	I1121 14:29:22.991183  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:29:22.991193  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:29:22.991250  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:29:22.991367  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:29:22.991381  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:29:22.991414  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:29:22.991488  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:29:22.991499  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:29:22.991526  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:29:22.991627  249617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-012258 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-012258]
	I1121 14:29:23.140756  249617 provision.go:177] copyRemoteCerts
	I1121 14:29:23.140833  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:29:23.140885  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.161751  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.269718  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:29:23.292619  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1121 14:29:23.314336  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:29:23.337086  249617 provision.go:87] duration metric: took 366.309314ms to configureAuth
	I1121 14:29:23.337129  249617 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:29:23.337306  249617 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:23.337320  249617 machine.go:97] duration metric: took 3.89496072s to provisionDockerMachine
	I1121 14:29:23.337326  249617 client.go:176] duration metric: took 11.527957207s to LocalClient.Create
	I1121 14:29:23.337344  249617 start.go:167] duration metric: took 11.528071392s to libmachine.API.Create "old-k8s-version-012258"
	I1121 14:29:23.337352  249617 start.go:293] postStartSetup for "old-k8s-version-012258" (driver="docker")
	I1121 14:29:23.337365  249617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:29:23.337422  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:29:23.337471  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.359217  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.466089  249617 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:29:23.470146  249617 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:29:23.470174  249617 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:29:23.470185  249617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:29:23.470249  249617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:29:23.470349  249617 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:29:23.470480  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:29:23.479086  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:23.506776  249617 start.go:296] duration metric: took 169.402964ms for postStartSetup
	I1121 14:29:23.507166  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:23.527044  249617 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/config.json ...
	I1121 14:29:23.527374  249617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:29:23.527425  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.546669  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.645314  249617 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:29:23.650498  249617 start.go:128] duration metric: took 11.844529266s to createHost
	I1121 14:29:23.650523  249617 start.go:83] releasing machines lock for "old-k8s-version-012258", held for 11.844683904s
	I1121 14:29:23.650592  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:23.671161  249617 ssh_runner.go:195] Run: cat /version.json
	I1121 14:29:23.671227  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.671321  249617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:29:23.671403  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.694189  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.694196  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.856609  249617 ssh_runner.go:195] Run: systemctl --version
	I1121 14:29:23.863273  249617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:29:23.867917  249617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:29:23.867991  249617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:29:23.895679  249617 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:29:23.895707  249617 start.go:496] detecting cgroup driver to use...
	I1121 14:29:23.895742  249617 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:29:23.895805  249617 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:29:23.911897  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:29:23.925350  249617 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:29:23.925400  249617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:29:23.943424  249617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:29:23.962675  249617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:29:24.059689  249617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:29:24.169263  249617 docker.go:234] disabling docker service ...
	I1121 14:29:24.169325  249617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:29:24.191949  249617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:29:24.206181  249617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:29:24.319402  249617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:29:24.455060  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:29:24.472888  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:29:24.497138  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1121 14:29:24.524424  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:29:24.536491  249617 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:29:24.536702  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:29:24.547193  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:24.559919  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:29:24.571627  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:24.581977  249617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:29:24.629839  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:29:24.640310  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:29:24.650595  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:29:24.660801  249617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:29:24.669493  249617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:29:24.677810  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:24.781513  249617 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:29:24.929576  249617 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:29:24.929707  249617 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:29:24.936782  249617 start.go:564] Will wait 60s for crictl version
	I1121 14:29:24.936893  249617 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.942453  249617 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:29:24.986447  249617 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:29:24.986527  249617 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:25.018021  249617 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:25.051308  249617 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1121 14:29:25.029036  255774 start.go:309] selected driver: docker
	I1121 14:29:25.029056  255774 start.go:930] validating driver "docker" against <nil>
	I1121 14:29:25.029071  255774 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:29:25.029977  255774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:29:25.123370  255774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-21 14:29:25.11156096 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:29:25.123696  255774 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:29:25.124078  255774 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:29:25.125758  255774 out.go:179] * Using Docker driver with root privileges
	I1121 14:29:25.127166  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:25.127249  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:25.127262  255774 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:29:25.127353  255774 start.go:353] cluster config:
	{Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:25.129454  255774 out.go:179] * Starting "default-k8s-diff-port-376255" primary control-plane node in "default-k8s-diff-port-376255" cluster
	I1121 14:29:25.130961  255774 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:29:25.132637  255774 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:29:25.134190  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:25.134237  255774 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1121 14:29:25.134251  255774 cache.go:65] Caching tarball of preloaded images
	I1121 14:29:25.134262  255774 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:29:25.134379  255774 preload.go:238] Found /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1121 14:29:25.134391  255774 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 14:29:25.134520  255774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json ...
	I1121 14:29:25.134560  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json: {Name:mk1db0ba6952ac549a7eae06783e73916a7ad392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.161339  255774 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:29:25.161363  255774 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:29:25.161384  255774 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:29:25.161419  255774 start.go:360] acquireMachinesLock for default-k8s-diff-port-376255: {Name:mka18b3ecaec4bae205bc7951f90400738bef300 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:29:25.161518  255774 start.go:364] duration metric: took 79.824µs to acquireMachinesLock for "default-k8s-diff-port-376255"
	I1121 14:29:25.161561  255774 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disabl
eCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:25.161653  255774 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:29:25.055066  249617 cli_runner.go:164] Run: docker network inspect old-k8s-version-012258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:25.085953  249617 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1121 14:29:25.093859  249617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:25.111432  249617 kubeadm.go:884] updating cluster {Name:old-k8s-version-012258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:29:25.111671  249617 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 14:29:25.111753  249617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:25.143860  249617 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:25.143888  249617 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:29:25.143953  249617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:25.174770  249617 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:25.174789  249617 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:29:25.174797  249617 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 containerd true true} ...
	I1121 14:29:25.174897  249617 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-012258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:25.174970  249617 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:25.211311  249617 cni.go:84] Creating CNI manager for ""
	I1121 14:29:25.211341  249617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:25.211371  249617 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:25.211401  249617 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-012258 NodeName:old-k8s-version-012258 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:25.211596  249617 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-012258"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:25.211673  249617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1121 14:29:25.224124  249617 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:29:25.224202  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:25.235430  249617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1121 14:29:25.254181  249617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:25.283842  249617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1121 14:29:25.302971  249617 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:25.309092  249617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:25.325170  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:25.438037  249617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:25.469767  249617 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258 for IP: 192.168.94.2
	I1121 14:29:25.469790  249617 certs.go:195] generating shared ca certs ...
	I1121 14:29:25.469811  249617 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.470023  249617 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:25.470095  249617 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:25.470105  249617 certs.go:257] generating profile certs ...
	I1121 14:29:25.470177  249617 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key
	I1121 14:29:25.470199  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt with IP's: []
	I1121 14:29:25.634340  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt ...
	I1121 14:29:25.634374  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt: {Name:mk5e1a3132436dad740351857d527e3c45fff4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.648586  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key ...
	I1121 14:29:25.648625  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key: {Name:mk757010d91a13b26eb1340def496546bee9bf26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.648791  249617 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc
	I1121 14:29:25.648816  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1121 14:29:25.817862  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc ...
	I1121 14:29:25.817892  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc: {Name:mk8a482343e99af6e8bdd7e52a6e5b813685beb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.818099  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc ...
	I1121 14:29:25.818121  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc: {Name:mk4cf761e884b2a77e105e39ad6b0495b59b5aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.818237  249617 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt
	I1121 14:29:25.818331  249617 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key
	I1121 14:29:25.818390  249617 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key
	I1121 14:29:25.818406  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt with IP's: []
	I1121 14:29:26.390351  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt ...
	I1121 14:29:26.390391  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt: {Name:mk37207f300780275f6aa5331fc436d60739196c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:26.390599  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key ...
	I1121 14:29:26.390617  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key: {Name:mkff5d416178c38a50235608b783c3957bee8456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:26.390849  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:26.390898  249617 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:26.390913  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:26.390946  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:26.390988  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:26.391029  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:26.391086  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:26.391817  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:26.418450  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:26.446063  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:26.469197  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:26.493823  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1121 14:29:26.526847  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:29:26.555176  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:25.915600  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:25.916118  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:25.916177  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:25.916228  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:25.948057  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:25.948080  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:25.948087  213058 cri.go:89] found id: ""
	I1121 14:29:25.948096  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:25.948160  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.952634  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.956801  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:25.956870  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:25.990988  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:25.991014  213058 cri.go:89] found id: ""
	I1121 14:29:25.991024  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:25.991083  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.995665  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:25.995736  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:26.031577  213058 cri.go:89] found id: ""
	I1121 14:29:26.031604  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.031612  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:26.031618  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:26.031665  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:26.064880  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:26.064907  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:26.064912  213058 cri.go:89] found id: ""
	I1121 14:29:26.064922  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:26.064979  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.070274  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.075659  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:26.075731  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:26.108079  213058 cri.go:89] found id: ""
	I1121 14:29:26.108108  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.108118  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:26.108125  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:26.108181  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:26.138988  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:26.139018  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:26.139024  213058 cri.go:89] found id: ""
	I1121 14:29:26.139034  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:26.139096  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.143487  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.147564  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:26.147631  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:26.185747  213058 cri.go:89] found id: ""
	I1121 14:29:26.185774  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.185785  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:26.185793  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:26.185848  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:26.220265  213058 cri.go:89] found id: ""
	I1121 14:29:26.220296  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.220308  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:26.220321  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:26.220335  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:26.265042  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:26.265072  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:26.402636  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:26.402672  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:26.484531  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:26.484565  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:26.484581  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:26.534239  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:26.534294  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:26.579971  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:26.580016  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:26.643693  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:26.643727  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:26.683712  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:26.683748  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:26.702800  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:26.702836  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:26.741813  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:26.741845  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:26.812944  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:26.812997  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:26.855307  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:26.855347  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:24.308535  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1121 14:29:24.308619  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.317176  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1121 14:29:24.317245  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.318774  252125 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1121 14:29:24.318825  252125 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.318867  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.328208  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1121 14:29:24.328249  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1121 14:29:24.328291  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.328305  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.328664  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1121 14:29:24.328708  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1121 14:29:24.335839  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1121 14:29:24.335900  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.337631  252125 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1121 14:29:24.337672  252125 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.337713  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.346363  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.346443  252125 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1121 14:29:24.346484  252125 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.346517  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.361284  252125 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1121 14:29:24.361331  252125 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.361375  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.361424  252125 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1121 14:29:24.361445  252125 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.361477  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.366787  252125 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1121 14:29:24.366831  252125 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1121 14:29:24.366871  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.379457  252125 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1121 14:29:24.379503  252125 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.379558  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.379677  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.388569  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.388608  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.388658  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.388681  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.388574  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.418705  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.418763  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.427350  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.434639  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.434777  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.437430  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.437452  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.477986  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.478027  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.478099  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:29:24.478334  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:24.478136  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.485019  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.485026  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.489362  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.521124  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.521651  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:29:24.521767  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:24.553384  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1121 14:29:24.553425  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1121 14:29:24.553522  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:29:24.553632  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:24.553699  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1121 14:29:24.553755  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.553769  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:29:24.553803  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:29:24.553853  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:24.553860  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:24.553893  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:29:24.553920  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1121 14:29:24.553945  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:24.553945  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1121 14:29:24.565027  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1121 14:29:24.565077  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1121 14:29:24.565153  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1121 14:29:24.565169  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1121 14:29:24.574297  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1121 14:29:24.574338  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1121 14:29:24.574363  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1121 14:29:24.574390  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1121 14:29:24.574393  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1121 14:29:24.574407  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1121 14:29:24.784169  252125 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.784246  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.964305  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1121 14:29:25.029557  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:25.029626  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:25.445459  252125 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1121 14:29:25.445578  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:26.691152  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.661495413s)
	I1121 14:29:26.691188  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1121 14:29:26.691209  252125 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:26.691206  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.245604103s)
	I1121 14:29:26.691250  252125 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1121 14:29:26.691264  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:26.691297  252125 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:26.691347  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.696141  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:28.100615  252125 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.404441617s)
	I1121 14:29:28.100696  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:28.100615  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.409327822s)
	I1121 14:29:28.100767  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1121 14:29:28.100803  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:28.100853  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:28.132780  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:25.163849  255774 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:29:25.164318  255774 start.go:159] libmachine.API.Create for "default-k8s-diff-port-376255" (driver="docker")
	I1121 14:29:25.164395  255774 client.go:173] LocalClient.Create starting
	I1121 14:29:25.164513  255774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem
	I1121 14:29:25.164575  255774 main.go:143] libmachine: Decoding PEM data...
	I1121 14:29:25.164605  255774 main.go:143] libmachine: Parsing certificate...
	I1121 14:29:25.164704  255774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem
	I1121 14:29:25.164760  255774 main.go:143] libmachine: Decoding PEM data...
	I1121 14:29:25.164776  255774 main.go:143] libmachine: Parsing certificate...
	I1121 14:29:25.165330  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:29:25.188513  255774 cli_runner.go:211] docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:29:25.188614  255774 network_create.go:284] running [docker network inspect default-k8s-diff-port-376255] to gather additional debugging logs...
	I1121 14:29:25.188640  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255
	W1121 14:29:25.213297  255774 cli_runner.go:211] docker network inspect default-k8s-diff-port-376255 returned with exit code 1
	I1121 14:29:25.213338  255774 network_create.go:287] error running [docker network inspect default-k8s-diff-port-376255]: docker network inspect default-k8s-diff-port-376255: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-376255 not found
	I1121 14:29:25.213435  255774 network_create.go:289] output of [docker network inspect default-k8s-diff-port-376255]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-376255 not found
	
	** /stderr **
	I1121 14:29:25.213589  255774 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:25.240844  255774 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-66cfc06dc768 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:44:28:22:82:94} reservation:<nil>}
	I1121 14:29:25.241874  255774 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-39921db0d513 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:e4:85:98:a5:e3} reservation:<nil>}
	I1121 14:29:25.242975  255774 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-36a8741c90a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:21:99:72:63:4a} reservation:<nil>}
	I1121 14:29:25.244042  255774 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-63d543fc8bbd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:58:40:d2:33:c4} reservation:<nil>}
	I1121 14:29:25.245269  255774 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb46e0}
	I1121 14:29:25.245303  255774 network_create.go:124] attempt to create docker network default-k8s-diff-port-376255 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1121 14:29:25.245384  255774 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 default-k8s-diff-port-376255
	I1121 14:29:25.322210  255774 network_create.go:108] docker network default-k8s-diff-port-376255 192.168.85.0/24 created
	I1121 14:29:25.322244  255774 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-376255" container
	I1121 14:29:25.322309  255774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:29:25.346732  255774 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-376255 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:29:25.374919  255774 oci.go:103] Successfully created a docker volume default-k8s-diff-port-376255
	I1121 14:29:25.374994  255774 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-376255-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --entrypoint /usr/bin/test -v default-k8s-diff-port-376255:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:29:26.343288  255774 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-376255
	I1121 14:29:26.343370  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:26.343387  255774 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:29:26.343457  255774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-376255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 14:29:26.582319  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:29:26.606403  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:26.635408  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:26.661287  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:26.686582  249617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:26.703157  249617 ssh_runner.go:195] Run: openssl version
	I1121 14:29:26.712353  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:26.725593  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.732381  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.732523  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.774823  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:26.785127  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:26.796035  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.800685  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.800751  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.842185  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:26.852632  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:26.863838  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.869571  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.869642  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.922017  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:26.934065  249617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:26.939457  249617 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:26.939526  249617 kubeadm.go:401] StartCluster: {Name:old-k8s-version-012258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:26.939648  249617 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:26.939710  249617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:26.978114  249617 cri.go:89] found id: ""
	I1121 14:29:26.978192  249617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:26.989363  249617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:27.000529  249617 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:27.000603  249617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:27.012158  249617 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:27.012179  249617 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:27.012231  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:29:27.022084  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:27.022141  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:27.034139  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:29:27.044897  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:27.045038  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:27.056593  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:29:27.066532  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:27.066615  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:27.077925  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:29:27.088254  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:27.088320  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:27.098442  249617 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:27.205509  249617 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:27.290009  249617 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:29.388121  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:29.388594  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:29.388645  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:29.388690  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:29.416964  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:29.416991  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:29.416996  213058 cri.go:89] found id: ""
	I1121 14:29:29.417006  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:29.417074  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.421476  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.425483  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:29.425557  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:29.453687  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:29.453708  213058 cri.go:89] found id: ""
	I1121 14:29:29.453718  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:29.453783  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.458267  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:29.458353  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:29.485804  213058 cri.go:89] found id: ""
	I1121 14:29:29.485865  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.485876  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:29.485883  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:29.485940  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:29.514265  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:29.514290  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:29.514294  213058 cri.go:89] found id: ""
	I1121 14:29:29.514302  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:29.514349  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.518626  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.522446  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:29.522501  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:29.549770  213058 cri.go:89] found id: ""
	I1121 14:29:29.549799  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.549811  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:29.549819  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:29.549868  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:29.577193  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:29.577217  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:29.577222  213058 cri.go:89] found id: ""
	I1121 14:29:29.577230  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:29.577288  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.581256  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.585291  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:29.585347  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:29.614632  213058 cri.go:89] found id: ""
	I1121 14:29:29.614664  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.614674  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:29.614682  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:29.614740  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:29.645697  213058 cri.go:89] found id: ""
	I1121 14:29:29.645721  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.645730  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:29.645741  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:29.645756  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:29.675578  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:29.675607  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:29.718952  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:29.718990  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:29.750089  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:29.750117  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:29.858708  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:29.858738  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:29.902976  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:29.903013  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:29.938083  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:29.938118  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:29.976329  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:29.976366  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:29.991448  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:29.991485  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:30.053990  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:30.054015  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:30.054032  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:30.089042  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:30.089076  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:30.124498  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:30.124528  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:32.685601  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:32.686035  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:32.686089  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:32.686144  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:32.744948  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:32.745095  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:32.745132  213058 cri.go:89] found id: ""
	I1121 14:29:32.745169  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:32.745355  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.752020  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.760837  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:32.761106  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:32.807418  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:32.807451  213058 cri.go:89] found id: ""
	I1121 14:29:32.807462  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:32.807521  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.813216  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:32.813289  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:32.852598  213058 cri.go:89] found id: ""
	I1121 14:29:32.852633  213058 logs.go:282] 0 containers: []
	W1121 14:29:32.852645  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:32.852653  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:32.852711  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:32.889120  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:32.889144  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:32.889148  213058 cri.go:89] found id: ""
	I1121 14:29:32.889157  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:32.889211  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.894834  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.900572  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:32.900646  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:32.937810  213058 cri.go:89] found id: ""
	I1121 14:29:32.937836  213058 logs.go:282] 0 containers: []
	W1121 14:29:32.937846  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:32.937853  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:32.937914  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:32.975713  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:32.975735  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:32.975741  213058 cri.go:89] found id: ""
	I1121 14:29:32.975751  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:32.975815  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.981574  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.985965  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:32.986030  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:33.019894  213058 cri.go:89] found id: ""
	I1121 14:29:33.019923  213058 logs.go:282] 0 containers: []
	W1121 14:29:33.019935  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:33.019949  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:33.020009  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:33.051872  213058 cri.go:89] found id: ""
	I1121 14:29:33.051901  213058 logs.go:282] 0 containers: []
	W1121 14:29:33.051911  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:33.051923  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:33.051937  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:33.103114  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:33.103153  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:33.142816  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:33.142846  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:33.209677  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:33.209736  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:33.255185  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:33.255220  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:33.272562  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:33.272600  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:33.319098  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:33.319132  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:33.366245  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:33.366286  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:33.410624  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:33.410660  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:33.458217  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:33.458253  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:33.586879  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:33.586919  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1121 14:29:29.835800  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.734910291s)
	I1121 14:29:29.835838  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1121 14:29:29.835860  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:29.835902  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:29.835802  252125 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.702989246s)
	I1121 14:29:29.835965  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1121 14:29:29.836056  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:29.840842  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1121 14:29:29.840873  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1121 14:29:32.866902  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (3.030968163s)
	I1121 14:29:32.866941  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1121 14:29:32.866961  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:32.867002  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:31.901829  255774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-376255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.558304176s)
	I1121 14:29:31.901864  255774 kic.go:203] duration metric: took 5.558473353s to extract preloaded images to volume ...
	W1121 14:29:31.901941  255774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 14:29:31.901969  255774 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 14:29:31.902010  255774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:29:31.985847  255774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-376255 --name default-k8s-diff-port-376255 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --network default-k8s-diff-port-376255 --ip 192.168.85.2 --volume default-k8s-diff-port-376255:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:29:32.403824  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Running}}
	I1121 14:29:32.427802  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.456228  255774 cli_runner.go:164] Run: docker exec default-k8s-diff-port-376255 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:29:32.514766  255774 oci.go:144] the created container "default-k8s-diff-port-376255" has a running status.
	I1121 14:29:32.514799  255774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa...
	I1121 14:29:32.829505  255774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:29:32.861911  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.888316  255774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:29:32.888342  255774 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-376255 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:29:32.948121  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.975355  255774 machine.go:94] provisionDockerMachine start ...
	I1121 14:29:32.975799  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:33.002463  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:33.002813  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:33.002834  255774 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:29:33.003677  255774 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37682->127.0.0.1:33070: read: connection reset by peer
	I1121 14:29:37.228254  249617 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1121 14:29:37.228434  249617 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:37.228644  249617 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:37.228822  249617 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:37.228907  249617 kubeadm.go:319] OS: Linux
	I1121 14:29:37.228971  249617 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:37.229029  249617 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:37.229111  249617 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:37.229198  249617 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:37.229264  249617 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:37.229333  249617 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:37.229403  249617 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:37.229468  249617 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:37.229624  249617 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:37.229762  249617 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:37.229892  249617 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1121 14:29:37.230051  249617 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:37.235113  249617 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:37.235306  249617 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:37.235508  249617 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:37.235691  249617 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:37.235858  249617 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:37.236102  249617 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:37.236205  249617 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:37.236303  249617 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:37.236516  249617 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-012258] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:29:37.236607  249617 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:37.236765  249617 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-012258] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:29:37.236861  249617 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:37.236954  249617 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:37.237021  249617 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:37.237104  249617 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:37.237178  249617 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:37.237257  249617 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:37.237352  249617 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:37.237438  249617 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:37.237554  249617 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:37.237649  249617 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:37.239227  249617 out.go:252]   - Booting up control plane ...
	I1121 14:29:37.239369  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:37.239534  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:37.239682  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:37.239829  249617 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:37.239965  249617 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:37.240022  249617 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:37.240260  249617 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1121 14:29:37.240373  249617 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.503152 seconds
	I1121 14:29:37.240759  249617 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:37.240933  249617 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:37.241035  249617 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:37.241286  249617 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-012258 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:37.241409  249617 kubeadm.go:319] [bootstrap-token] Using token: yix385.n0xejrlt7sdx1ngs
	I1121 14:29:37.243198  249617 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:37.243379  249617 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:37.243497  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:37.243755  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:37.243946  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:37.244147  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:37.244287  249617 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:37.244477  249617 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:37.244564  249617 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:37.244632  249617 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:37.244642  249617 kubeadm.go:319] 
	I1121 14:29:37.244725  249617 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:37.244736  249617 kubeadm.go:319] 
	I1121 14:29:37.244834  249617 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:37.244845  249617 kubeadm.go:319] 
	I1121 14:29:37.244877  249617 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:37.244966  249617 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:37.245033  249617 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:37.245045  249617 kubeadm.go:319] 
	I1121 14:29:37.245111  249617 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:37.245120  249617 kubeadm.go:319] 
	I1121 14:29:37.245178  249617 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:37.245192  249617 kubeadm.go:319] 
	I1121 14:29:37.245274  249617 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:37.245371  249617 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:37.245468  249617 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:37.245476  249617 kubeadm.go:319] 
	I1121 14:29:37.245604  249617 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:37.245734  249617 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:37.245755  249617 kubeadm.go:319] 
	I1121 14:29:37.245866  249617 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yix385.n0xejrlt7sdx1ngs \
	I1121 14:29:37.246024  249617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:37.246062  249617 kubeadm.go:319] 	--control-plane 
	I1121 14:29:37.246072  249617 kubeadm.go:319] 
	I1121 14:29:37.246178  249617 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:37.246189  249617 kubeadm.go:319] 
	I1121 14:29:37.246294  249617 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yix385.n0xejrlt7sdx1ngs \
	I1121 14:29:37.246443  249617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:37.246454  249617 cni.go:84] Creating CNI manager for ""
	I1121 14:29:37.246462  249617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:37.248274  249617 out.go:179] * Configuring CNI (Container Networking Interface) ...
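The join commands printed above embed a --discovery-token-ca-cert-hash; that value can be re-derived from the cluster CA with standard openssl tooling if it ever needs to be cross-checked (a sketch, assuming the default kubeadm CA path on the control-plane node):
	# Recompute the sha256 hash kubeadm prints as --discovery-token-ca-cert-hash
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'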
	I1121 14:29:36.147516  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376255
	
	I1121 14:29:36.147569  255774 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-376255"
	I1121 14:29:36.147633  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.169609  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:36.169898  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:36.169928  255774 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-376255 && echo "default-k8s-diff-port-376255" | sudo tee /etc/hostname
	I1121 14:29:36.328958  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376255
	
	I1121 14:29:36.329040  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.353105  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:36.353414  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:36.353448  255774 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-376255' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-376255/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-376255' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:29:36.504067  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:29:36.504097  255774 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:29:36.504119  255774 ubuntu.go:190] setting up certificates
	I1121 14:29:36.504133  255774 provision.go:84] configureAuth start
	I1121 14:29:36.504206  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:36.528674  255774 provision.go:143] copyHostCerts
	I1121 14:29:36.528752  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:29:36.528762  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:29:36.528840  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:29:36.528968  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:29:36.528997  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:29:36.529043  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:29:36.529141  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:29:36.529152  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:29:36.529188  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:29:36.529281  255774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-376255 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-376255 localhost minikube]
	I1121 14:29:36.617208  255774 provision.go:177] copyRemoteCerts
	I1121 14:29:36.617283  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:29:36.617345  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.639948  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:36.749486  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:29:36.777360  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1121 14:29:36.804875  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:29:36.830920  255774 provision.go:87] duration metric: took 326.762892ms to configureAuth
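The server certificate generated during configureAuth is issued for the SANs listed in the san=[...] field above; whether a certificate on disk actually carries those names can be checked with openssl (a sketch using the server.pem path from this log):
	# List the Subject Alternative Names of the generated machine certificate
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'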
	I1121 14:29:36.830953  255774 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:29:36.831165  255774 config.go:182] Loaded profile config "default-k8s-diff-port-376255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:36.831181  255774 machine.go:97] duration metric: took 3.855604158s to provisionDockerMachine
	I1121 14:29:36.831191  255774 client.go:176] duration metric: took 11.666782197s to LocalClient.Create
	I1121 14:29:36.831216  255774 start.go:167] duration metric: took 11.666902979s to libmachine.API.Create "default-k8s-diff-port-376255"
	I1121 14:29:36.831234  255774 start.go:293] postStartSetup for "default-k8s-diff-port-376255" (driver="docker")
	I1121 14:29:36.831254  255774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:29:36.831311  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:29:36.831360  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.855811  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:36.969760  255774 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:29:36.974452  255774 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:29:36.974529  255774 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:29:36.974577  255774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:29:36.974658  255774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:29:36.974771  255774 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:29:36.974903  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:29:36.984975  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:37.017462  255774 start.go:296] duration metric: took 186.210262ms for postStartSetup
	I1121 14:29:37.017947  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:37.041309  255774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json ...
	I1121 14:29:37.041659  255774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:29:37.041731  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.070697  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.177189  255774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:29:37.185711  255774 start.go:128] duration metric: took 12.024042461s to createHost
	I1121 14:29:37.185741  255774 start.go:83] releasing machines lock for "default-k8s-diff-port-376255", held for 12.024206528s
	I1121 14:29:37.185820  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:37.211853  255774 ssh_runner.go:195] Run: cat /version.json
	I1121 14:29:37.211903  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.211965  255774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:29:37.212033  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.238575  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.242252  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.421321  255774 ssh_runner.go:195] Run: systemctl --version
	I1121 14:29:37.431728  255774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:29:37.437939  255774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:29:37.438053  255774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:29:37.469409  255774 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:29:37.469437  255774 start.go:496] detecting cgroup driver to use...
	I1121 14:29:37.469471  255774 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:29:37.469521  255774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:29:37.490669  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:29:37.507754  255774 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:29:37.507821  255774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:29:37.525644  255774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:29:37.545289  255774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:29:37.674060  255774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:29:37.795128  255774 docker.go:234] disabling docker service ...
	I1121 14:29:37.795198  255774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:29:37.819043  255774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:29:37.834819  255774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:29:37.960408  255774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:29:38.072269  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:29:38.089314  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:29:38.105248  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:29:38.117445  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:29:38.128509  255774 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:29:38.128607  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:29:38.139526  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:38.150896  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:29:38.161459  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:38.173179  255774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:29:38.183645  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:29:38.194923  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:29:38.207896  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:29:38.220346  255774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:29:38.230823  255774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:29:38.241807  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:38.339708  255774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:29:38.460319  255774 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:29:38.460387  255774 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:29:38.465812  255774 start.go:564] Will wait 60s for crictl version
	I1121 14:29:38.465875  255774 ssh_runner.go:195] Run: which crictl
	I1121 14:29:38.470166  255774 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:29:38.507773  255774 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:29:38.507860  255774 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:38.532247  255774 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:38.559098  255774 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
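The sed edits above rewrite /etc/containerd/config.toml in place (SystemdCgroup, sandbox pause image, CNI conf_dir) before containerd is restarted; with shell access to the node, a quick way to confirm the edits took effect is (a sketch):
	# Confirm the values written by the sed commands above
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# Confirm containerd came back up after the restart
	sudo systemctl is-active containerd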
	W1121 14:29:33.655577  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:33.655599  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:33.655612  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.225853  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:36.226247  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:36.226304  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:36.226364  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:36.259583  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:36.259613  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:36.259619  213058 cri.go:89] found id: ""
	I1121 14:29:36.259628  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:36.259690  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.264798  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.269597  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:36.269663  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:36.304312  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:36.304335  213058 cri.go:89] found id: ""
	I1121 14:29:36.304346  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:36.304403  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.309760  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:36.309833  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:36.342617  213058 cri.go:89] found id: ""
	I1121 14:29:36.342643  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.342653  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:36.342660  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:36.342722  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:36.378880  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.378909  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:36.378914  213058 cri.go:89] found id: ""
	I1121 14:29:36.378924  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:36.378996  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.384032  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.388866  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:36.388932  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:36.427253  213058 cri.go:89] found id: ""
	I1121 14:29:36.427282  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.427293  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:36.427300  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:36.427355  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:36.461581  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:36.461604  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:36.461609  213058 cri.go:89] found id: ""
	I1121 14:29:36.461618  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:36.461677  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.466623  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.471422  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:36.471490  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:36.503502  213058 cri.go:89] found id: ""
	I1121 14:29:36.503533  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.503566  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:36.503575  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:36.503633  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:36.538350  213058 cri.go:89] found id: ""
	I1121 14:29:36.538379  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.538390  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:36.538404  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:36.538419  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:36.666987  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:36.667025  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:36.685628  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:36.685659  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:36.763464  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:36.763491  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:36.763508  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:36.808789  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:36.808832  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.887558  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:36.887596  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:36.952391  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:36.952434  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:36.993139  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:36.993167  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:37.037499  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:37.037552  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:37.084237  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:37.084270  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:37.132236  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:37.132272  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:37.172720  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:37.172753  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:34.341753  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.474720913s)
	I1121 14:29:34.341781  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1121 14:29:34.341812  252125 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:34.341855  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:37.308520  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.966633628s)
	I1121 14:29:37.308585  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1121 14:29:37.308616  252125 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:37.308666  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:37.772300  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1121 14:29:37.772349  252125 cache_images.go:125] Successfully loaded all cached images
	I1121 14:29:37.772358  252125 cache_images.go:94] duration metric: took 13.627858156s to LoadCachedImages
	I1121 14:29:37.772375  252125 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1121 14:29:37.772522  252125 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-921956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-921956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:37.772622  252125 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:37.802988  252125 cni.go:84] Creating CNI manager for ""
	I1121 14:29:37.803017  252125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:37.803041  252125 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:37.803067  252125 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-921956 NodeName:no-preload-921956 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:37.803212  252125 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-921956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
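This multi-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what later gets copied to /var/tmp/minikube/kubeadm.yaml.new; once the kubeadm binary is on the node it can be sanity-checked without running init (a sketch; recent kubeadm releases ship a `kubeadm config validate` subcommand):
	# Validate the generated config against the kubeadm API schema
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new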
	
	I1121 14:29:37.803298  252125 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:37.814189  252125 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1121 14:29:37.814255  252125 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:37.824124  252125 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1121 14:29:37.824214  252125 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1121 14:29:37.824231  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1121 14:29:37.824217  252125 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1121 14:29:37.829417  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1121 14:29:37.829466  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1121 14:29:38.860713  252125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:29:38.875498  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1121 14:29:38.880447  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1121 14:29:38.880477  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1121 14:29:39.014274  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1121 14:29:39.021151  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1121 14:29:39.021187  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1121 14:29:39.234010  252125 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:39.244382  252125 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1121 14:29:39.259897  252125 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:39.279143  252125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
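The kubectl/kubelet/kubeadm downloads above are pinned to the published checksums via the ?checksum=file: query; the same verification can be reproduced by hand against dl.k8s.io (a sketch using the kubectl URL from this log):
	# Download kubectl v1.34.1 and verify it against the published sha256
	curl -fsSLO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
	curl -fsSLO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256"
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check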
	I1121 14:29:38.560688  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:38.580956  255774 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:29:38.585728  255774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:38.599140  255774 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:29:38.599295  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:38.599391  255774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:38.631637  255774 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:38.631660  255774 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:29:38.631720  255774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:38.665498  255774 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:38.665522  255774 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:29:38.665530  255774 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1121 14:29:38.665659  255774 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-376255 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:38.665752  255774 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:38.694106  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:38.694138  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:38.694156  255774 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:38.694182  255774 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-376255 NodeName:default-k8s-diff-port-376255 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:38.694318  255774 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-376255"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:38.694377  255774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:38.704016  255774 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:29:38.704074  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:38.712471  255774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1121 14:29:38.726311  255774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:38.743589  255774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1121 14:29:38.759275  255774 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:38.763723  255774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:38.775814  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:38.870850  255774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:38.898876  255774 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255 for IP: 192.168.85.2
	I1121 14:29:38.898898  255774 certs.go:195] generating shared ca certs ...
	I1121 14:29:38.898917  255774 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:38.899068  255774 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:38.899116  255774 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:38.899130  255774 certs.go:257] generating profile certs ...
	I1121 14:29:38.899196  255774 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key
	I1121 14:29:38.899223  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt with IP's: []
	I1121 14:29:39.101636  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt ...
	I1121 14:29:39.101669  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt: {Name:mk48f410a390b01d5b10a9357a2648374ae8306b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.101873  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key ...
	I1121 14:29:39.101885  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key: {Name:mkb89c45215e08640f5b5fa9a6de6863ea0983e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.102008  255774 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066
	I1121 14:29:39.102024  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:29:39.438352  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 ...
	I1121 14:29:39.438387  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066: {Name:mkc5f7dc938a9541dec0c2accd850515b39a25d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.438574  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066 ...
	I1121 14:29:39.438586  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066: {Name:mka67f2d91e35acd02a0ed4174188db6877ef796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.438666  255774 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt
	I1121 14:29:39.438744  255774 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key
	I1121 14:29:39.438811  255774 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key
	I1121 14:29:39.438826  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt with IP's: []
	I1121 14:29:39.523793  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt ...
	I1121 14:29:39.523827  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt: {Name:mk2418751bb08ae4f2cae2628ba430b2e731f823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.524011  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key ...
	I1121 14:29:39.524031  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key: {Name:mk12031f310020bd38886fd870544563c6ab1faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.524255  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:39.524307  255774 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:39.524323  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:39.524353  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:39.524383  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:39.524407  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:39.524445  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:39.525071  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:39.546065  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:39.565880  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:39.585450  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:39.604394  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1121 14:29:39.623736  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:29:39.642460  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:39.661463  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:29:39.681314  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:39.879137  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:39.899730  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:39.918630  255774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:39.935942  255774 ssh_runner.go:195] Run: openssl version
	I1121 14:29:39.943062  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:40.020861  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.026152  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.026209  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.067681  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:40.077051  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:40.087944  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.092369  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.092434  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.132125  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:40.142255  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:40.152828  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.157171  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.157265  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.199881  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
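The ls / `openssl x509 -hash -noout` / `ln -fs` sequence above is how the extra CA files copied to /usr/share/ca-certificates get registered in the node's OpenSSL trust store: each certificate's subject hash is computed and a hash-named symlink (suffix .0) is created under /etc/ssl/certs. A minimal sketch of the same pattern, assuming an illustrative certificate path rather than one taken from this run:

	# register a CA under its OpenSSL subject-hash name (illustrative path)
	CERT=/usr/share/ca-certificates/example-ca.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941 for minikubeCA.pem above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"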
	I1121 14:29:40.210053  255774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:40.214456  255774 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:40.214524  255774 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:40.214625  255774 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:40.214692  255774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:40.249359  255774 cri.go:89] found id: ""
	I1121 14:29:40.249429  255774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:40.259121  255774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:40.270847  255774 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:40.270910  255774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:40.283266  255774 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:40.283287  255774 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:40.283341  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1121 14:29:40.293676  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:40.293725  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:40.303277  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1121 14:29:40.313015  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:40.313073  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:40.322086  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.330920  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:40.331015  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.339376  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1121 14:29:40.347984  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:40.348046  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
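The grep/rm pairs above implement minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint (port 8444 for this profile); otherwise it is removed so kubeadm can regenerate it. Roughly, per file (a sketch of the pattern, not the exact Go logic):

	# keep admin.conf only if it targets the expected endpoint, else remove it
	f=/etc/kubernetes/admin.conf
	sudo grep -q "https://control-plane.minikube.internal:8444" "$f" || sudo rm -f "$f"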
	I1121 14:29:40.356683  255774 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:40.404354  255774 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:29:40.404455  255774 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:40.435448  255774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:40.435583  255774 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:40.435628  255774 kubeadm.go:319] OS: Linux
	I1121 14:29:40.435689  255774 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:40.435827  255774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:40.435905  255774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:40.436039  255774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:40.436108  255774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:40.436176  255774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:40.436276  255774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:40.436351  255774 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:40.508224  255774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:40.508370  255774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:40.508531  255774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:29:40.513996  255774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
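The `kubeadm init` invocation at 14:29:40.356683 passes a long --ignore-preflight-errors list because, under the docker driver, checks such as SystemVerification, Swap and the CPU/memory probes cannot be satisfied inside the kic container; the CGROUPS_* lines above are the verification output that kubeadm still prints even though the failure is ignored. To reproduce just that verification step on the node, the preflight phase can be run in isolation (a sketch, assuming the config file minikube generated above):

	# re-run only kubeadm's preflight checks against the generated config
	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification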
	I1121 14:29:39.295828  252125 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:39.301164  252125 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:39.312709  252125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:39.400897  252125 ssh_runner.go:195] Run: sudo systemctl start kubelet
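The /etc/hosts rewrite at 14:29:39.301164 is idempotent: any existing control-plane.minikube.internal entry is filtered out, the current node IP is appended, and the result is copied back with sudo (plain shell redirection alone would not have root privileges on /etc/hosts). The same pattern, sketched with the IP from this run:

	# (re)pin control-plane.minikube.internal to the node IP in /etc/hosts
	IP=192.168.103.2
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  printf '%s\tcontrol-plane.minikube.internal\n' "$IP"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts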
	I1121 14:29:39.429294  252125 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956 for IP: 192.168.103.2
	I1121 14:29:39.429315  252125 certs.go:195] generating shared ca certs ...
	I1121 14:29:39.429332  252125 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.429485  252125 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:39.429583  252125 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:39.429600  252125 certs.go:257] generating profile certs ...
	I1121 14:29:39.429678  252125 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key
	I1121 14:29:39.429693  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt with IP's: []
	I1121 14:29:39.556088  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt ...
	I1121 14:29:39.556115  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: {Name:mkc697edce2d4ccb5a4a2ccbe74255aef4a205c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.556297  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key ...
	I1121 14:29:39.556312  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key: {Name:mkad7b167b883af61314c3f8b6c71358edc782dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.556419  252125 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d
	I1121 14:29:39.556435  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1121 14:29:39.871499  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d ...
	I1121 14:29:39.871529  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d: {Name:mkc839b1c936af809ed1159ef4599336fd260d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.871726  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d ...
	I1121 14:29:39.871748  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d: {Name:mkc2f0abcac84f6547f3e0edb165e90b14fdd7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.871882  252125 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt
	I1121 14:29:39.871997  252125 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key
	I1121 14:29:39.872096  252125 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key
	I1121 14:29:39.872120  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt with IP's: []
	I1121 14:29:40.083173  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt ...
	I1121 14:29:40.083201  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt: {Name:mkba7efd029f616230e0b3cf14c4f32abac0549e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:40.083385  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key ...
	I1121 14:29:40.083414  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key: {Name:mk24f6fbb57f5dfce4a401be193e0a832a6ccf6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:40.083661  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:40.083700  252125 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:40.083711  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:40.083749  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:40.083780  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:40.083827  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:40.083887  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:40.084653  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:40.106430  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:40.126520  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:40.148412  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:40.169973  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:29:40.191493  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:29:40.214458  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:40.234692  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:29:40.261986  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:40.352437  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:40.372804  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:40.394700  252125 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:40.411183  252125 ssh_runner.go:195] Run: openssl version
	I1121 14:29:40.419607  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:40.431060  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.436371  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.436429  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.481320  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:40.492797  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:40.502878  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.507432  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.507499  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.567779  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:40.577673  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:40.587826  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.592472  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.592528  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.627626  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:40.637464  252125 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:40.641884  252125 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:40.641943  252125 kubeadm.go:401] StartCluster: {Name:no-preload-921956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-921956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:40.642030  252125 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:40.642085  252125 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:40.673351  252125 cri.go:89] found id: ""
	I1121 14:29:40.673423  252125 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:40.682715  252125 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:40.691493  252125 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:40.691581  252125 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:40.700143  252125 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:40.700160  252125 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:40.700205  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:29:40.708734  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:40.708799  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:40.717135  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:29:40.726191  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:40.726262  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:40.734074  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.742647  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:40.742709  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.751091  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:29:40.759770  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:40.759841  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:40.768253  252125 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:40.810825  252125 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:29:40.810892  252125 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:40.831836  252125 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:40.831940  252125 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:40.832026  252125 kubeadm.go:319] OS: Linux
	I1121 14:29:40.832115  252125 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:40.832212  252125 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:40.832286  252125 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:40.832358  252125 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:40.832432  252125 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:40.832504  252125 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:40.832668  252125 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:40.832735  252125 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:40.895341  252125 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:40.895491  252125 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:40.895637  252125 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:29:40.901358  252125 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:37.249631  249617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:37.262987  249617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1121 14:29:37.263020  249617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:37.283444  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:38.138719  249617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:38.138808  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:38.138810  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-012258 minikube.k8s.io/updated_at=2025_11_21T14_29_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=old-k8s-version-012258 minikube.k8s.io/primary=true
	I1121 14:29:38.150782  249617 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:38.225220  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:38.726231  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:39.225533  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:39.725591  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.225601  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.725734  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:41.226112  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
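The repeated `kubectl get sa default` calls above, issued roughly every 500ms by process 249617, are minikube waiting for the default ServiceAccount to appear, which only happens once kube-controller-manager's service-account controller is running; the loop stops as soon as the command succeeds. A shell sketch of the same wait, assuming the binary path and kubeconfig from this run:

	# wait until the default ServiceAccount exists (controllers are up)
	until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done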
	I1121 14:29:40.521190  255774 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:40.521325  255774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:40.521431  255774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:41.003970  255774 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:41.240665  255774 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:41.425685  255774 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:41.689428  255774 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:41.923373  255774 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:41.923563  255774 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-376255 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:29:42.051973  255774 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:42.052979  255774 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-376255 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:29:42.277531  255774 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:42.491572  255774 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:42.605458  255774 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:42.605535  255774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:42.870659  255774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:43.039072  255774 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:29:43.228611  255774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:43.489903  255774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:43.563271  255774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:43.563948  255774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:43.568453  255774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:39.727688  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:39.728083  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:39.728134  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:39.728197  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:39.758413  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:39.758436  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:39.758441  213058 cri.go:89] found id: ""
	I1121 14:29:39.758452  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:39.758508  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.763439  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.767912  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:39.767980  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:39.802923  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:39.802948  213058 cri.go:89] found id: ""
	I1121 14:29:39.802957  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:39.803013  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.807778  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:39.807853  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:39.835286  213058 cri.go:89] found id: ""
	I1121 14:29:39.835314  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.835335  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:39.835343  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:39.835408  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:39.864986  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:39.865034  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:39.865040  213058 cri.go:89] found id: ""
	I1121 14:29:39.865050  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:39.865105  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.869441  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.873676  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:39.873739  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:39.902671  213058 cri.go:89] found id: ""
	I1121 14:29:39.902698  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.902707  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:39.902715  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:39.902762  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:39.933452  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:39.933477  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:39.933483  213058 cri.go:89] found id: ""
	I1121 14:29:39.933492  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:39.933557  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.938051  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.942029  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:39.942094  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:39.969991  213058 cri.go:89] found id: ""
	I1121 14:29:39.970018  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.970028  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:39.970036  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:39.970086  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:39.997381  213058 cri.go:89] found id: ""
	I1121 14:29:39.997406  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.997417  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:39.997429  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:39.997443  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:40.027188  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:40.027213  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:40.067878  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:40.067906  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:40.101358  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:40.101388  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:40.115674  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:40.115704  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:40.153845  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:40.153871  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:40.188913  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:40.188944  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:40.244995  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:40.245033  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:40.351506  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:40.351558  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:40.417221  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:40.417244  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:40.417263  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:40.457789  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:40.457836  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:40.520712  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:40.520748  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.056648  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:43.057094  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:43.057150  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:43.057204  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:43.085236  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:43.085260  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:43.085265  213058 cri.go:89] found id: ""
	I1121 14:29:43.085275  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:43.085333  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.089868  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.094074  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:43.094134  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:43.122420  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:43.122447  213058 cri.go:89] found id: ""
	I1121 14:29:43.122457  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:43.122512  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.126830  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:43.126892  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:43.156518  213058 cri.go:89] found id: ""
	I1121 14:29:43.156566  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.156577  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:43.156584  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:43.156646  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:43.185212  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:43.185233  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.185238  213058 cri.go:89] found id: ""
	I1121 14:29:43.185277  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:43.185338  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.190000  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.194074  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:43.194131  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:43.224175  213058 cri.go:89] found id: ""
	I1121 14:29:43.224201  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.224211  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:43.224218  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:43.224277  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:43.258260  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:43.258292  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:43.258299  213058 cri.go:89] found id: ""
	I1121 14:29:43.258310  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:43.258378  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.263276  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.268195  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:43.268264  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:43.303269  213058 cri.go:89] found id: ""
	I1121 14:29:43.303300  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.303311  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:43.303319  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:43.303379  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:43.333956  213058 cri.go:89] found id: ""
	I1121 14:29:43.333985  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.333995  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:43.334007  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:43.334021  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:43.366338  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:43.366369  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:43.458987  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:43.459027  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:43.497960  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:43.497995  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:43.539997  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:43.540035  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:43.575882  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:43.575911  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
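The log-gathering passes above for process 213058 are driven entirely by crictl: containers are enumerated per component with a --name or --label filter, and for every matching ID the last 400 log lines are tailed; components with no matching container (coredns, kube-proxy, kindnet, storage-provisioner in this run) are skipped with a warning. The core pattern, sketched for one component:

	# list kube-apiserver containers in any state, then tail each one's logs
	for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	  sudo /usr/local/bin/crictl logs --tail 400 "$id"
	done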
	I1121 14:29:40.903405  252125 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:40.903502  252125 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:40.903630  252125 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:41.180390  252125 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:41.211121  252125 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:41.523007  252125 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:42.461521  252125 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:42.641495  252125 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:42.641701  252125 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-921956] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:29:42.773640  252125 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:42.773843  252125 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-921956] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:29:42.921369  252125 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:43.256203  252125 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:43.834470  252125 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:43.834645  252125 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:43.949422  252125 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:44.093777  252125 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:29:44.227287  252125 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:44.509482  252125 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:44.696294  252125 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:44.696767  252125 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:44.705846  252125 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:43.573374  255774 out.go:252]   - Booting up control plane ...
	I1121 14:29:43.573510  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:43.573669  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:43.573781  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:43.590344  255774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:43.590494  255774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:29:43.599838  255774 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:29:43.600184  255774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:43.600247  255774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:43.720721  255774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:29:43.720878  255774 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:29:44.721899  255774 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001196965s
	I1121 14:29:44.724830  255774 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:29:44.724972  255774 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1121 14:29:44.725131  255774 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:29:44.725253  255774 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:29:41.726266  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:42.225460  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:42.725727  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.225740  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.725669  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:44.225350  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:44.725651  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:45.226025  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:45.725289  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:46.226316  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.632243  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:43.632278  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.681909  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:43.681959  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:43.723402  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:43.723454  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:43.776606  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:43.776641  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:43.793171  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:43.793200  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:43.854264  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:43.854293  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:43.854308  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:46.383659  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:46.384075  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:46.384128  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:46.384191  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:46.441629  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:46.441734  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:46.441754  213058 cri.go:89] found id: ""
	I1121 14:29:46.441776  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:46.441873  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.447714  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.453337  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:46.453422  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:46.497451  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:46.497475  213058 cri.go:89] found id: ""
	I1121 14:29:46.497485  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:46.497585  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.504731  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:46.504801  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:46.562972  213058 cri.go:89] found id: ""
	I1121 14:29:46.563014  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.563027  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:46.563036  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:46.563287  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:46.611186  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:46.611216  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:46.611221  213058 cri.go:89] found id: ""
	I1121 14:29:46.611231  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:46.611289  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.620404  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.626388  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:46.626559  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:46.674192  213058 cri.go:89] found id: ""
	I1121 14:29:46.674247  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.674259  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:46.674267  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:46.674448  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:46.749738  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:46.749765  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:46.749771  213058 cri.go:89] found id: ""
	I1121 14:29:46.749780  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:46.749835  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.756273  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.763986  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:46.764120  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:46.811858  213058 cri.go:89] found id: ""
	I1121 14:29:46.811883  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.811901  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:46.811909  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:46.811963  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:46.849599  213058 cri.go:89] found id: ""
	I1121 14:29:46.849645  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.849655  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:46.849666  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:46.849683  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:46.913988  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:46.914024  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:46.953189  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:46.953227  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:47.001663  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:47.001705  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:47.041106  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:47.041137  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:47.107673  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:47.107712  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:47.240432  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:47.240473  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:47.288852  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:47.288894  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1121 14:29:46.531314  255774 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.80645272s
	I1121 14:29:47.509316  255774 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.784421033s
	I1121 14:29:49.226647  255774 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501794549s
	I1121 14:29:49.239409  255774 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:49.252719  255774 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:49.264076  255774 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:49.264371  255774 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-376255 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:49.274799  255774 kubeadm.go:319] [bootstrap-token] Using token: 8nwcfl.9utqukqcvuro6a4p
	I1121 14:29:44.769338  252125 out.go:252]   - Booting up control plane ...
	I1121 14:29:44.769476  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:44.769652  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:44.769771  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:44.769940  252125 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:44.770087  252125 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:29:44.778391  252125 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:29:44.779655  252125 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:44.779729  252125 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:44.894196  252125 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:29:44.894364  252125 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:29:45.895053  252125 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000974959s
	I1121 14:29:45.898754  252125 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:29:45.898875  252125 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1121 14:29:45.899003  252125 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:29:45.899149  252125 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:29:48.621169  252125 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.722350043s
	I1121 14:29:49.059709  252125 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.160801257s
	I1121 14:29:49.276414  255774 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:49.276590  255774 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:49.280532  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:49.287374  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:49.290401  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:49.293308  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:49.297552  255774 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:49.632747  255774 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:46.726037  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:47.228665  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:47.725338  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:48.226199  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:48.725959  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:49.225812  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:49.725337  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.225293  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.310282  249617 kubeadm.go:1114] duration metric: took 12.17154172s to wait for elevateKubeSystemPrivileges
	I1121 14:29:50.310322  249617 kubeadm.go:403] duration metric: took 23.370802852s to StartCluster
	I1121 14:29:50.310347  249617 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:50.310438  249617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:50.311864  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:50.312167  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:50.312169  249617 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:50.312267  249617 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:50.312352  249617 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-012258"
	I1121 14:29:50.312372  249617 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-012258"
	I1121 14:29:50.312403  249617 host.go:66] Checking if "old-k8s-version-012258" exists ...
	I1121 14:29:50.312458  249617 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:50.312516  249617 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-012258"
	I1121 14:29:50.312530  249617 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-012258"
	I1121 14:29:50.312827  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.312965  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.314603  249617 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:50.316238  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:50.339724  249617 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:50.056893  255774 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:50.634602  255774 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:50.635720  255774 kubeadm.go:319] 
	I1121 14:29:50.635840  255774 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:50.635916  255774 kubeadm.go:319] 
	I1121 14:29:50.636085  255774 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:50.636139  255774 kubeadm.go:319] 
	I1121 14:29:50.636189  255774 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:50.636300  255774 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:50.636386  255774 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:50.636448  255774 kubeadm.go:319] 
	I1121 14:29:50.636574  255774 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:50.636584  255774 kubeadm.go:319] 
	I1121 14:29:50.636647  255774 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:50.636652  255774 kubeadm.go:319] 
	I1121 14:29:50.636709  255774 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:50.636796  255774 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:50.636878  255774 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:50.636886  255774 kubeadm.go:319] 
	I1121 14:29:50.636981  255774 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:50.637083  255774 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:50.637090  255774 kubeadm.go:319] 
	I1121 14:29:50.637247  255774 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 8nwcfl.9utqukqcvuro6a4p \
	I1121 14:29:50.637414  255774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:50.637449  255774 kubeadm.go:319] 	--control-plane 
	I1121 14:29:50.637460  255774 kubeadm.go:319] 
	I1121 14:29:50.637571  255774 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:50.637580  255774 kubeadm.go:319] 
	I1121 14:29:50.637672  255774 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 8nwcfl.9utqukqcvuro6a4p \
	I1121 14:29:50.637785  255774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:50.642202  255774 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:50.642513  255774 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:50.642647  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:50.642693  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:50.645524  255774 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:50.339929  249617 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-012258"
	I1121 14:29:50.339977  249617 host.go:66] Checking if "old-k8s-version-012258" exists ...
	I1121 14:29:50.340433  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.341133  249617 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:50.341154  249617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:50.341208  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:50.377822  249617 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:50.377846  249617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:50.377844  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:50.377907  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:50.410483  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:50.415901  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:50.468678  249617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:50.503643  249617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:50.536480  249617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:50.667362  249617 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:50.668484  249617 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-012258" to be "Ready" ...
	I1121 14:29:50.954598  249617 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:29:50.401999  252125 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502477764s
	I1121 14:29:50.419850  252125 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:50.933016  252125 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:50.948821  252125 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:50.949093  252125 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-921956 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:50.961417  252125 kubeadm.go:319] [bootstrap-token] Using token: uhuim0.7wh8hbt7v76eo7qs
	I1121 14:29:50.955828  249617 addons.go:530] duration metric: took 643.55365ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:51.174831  249617 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-012258" context rescaled to 1 replicas
	I1121 14:29:50.963415  252125 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:50.963588  252125 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:50.971176  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:50.980644  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:50.985255  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:50.989946  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:50.994015  252125 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:51.128309  252125 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:51.550178  252125 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:52.128624  252125 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:52.129402  252125 kubeadm.go:319] 
	I1121 14:29:52.129496  252125 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:52.129528  252125 kubeadm.go:319] 
	I1121 14:29:52.129657  252125 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:52.129669  252125 kubeadm.go:319] 
	I1121 14:29:52.129705  252125 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:52.129798  252125 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:52.129906  252125 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:52.129923  252125 kubeadm.go:319] 
	I1121 14:29:52.129995  252125 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:52.130004  252125 kubeadm.go:319] 
	I1121 14:29:52.130078  252125 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:52.130087  252125 kubeadm.go:319] 
	I1121 14:29:52.130170  252125 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:52.130304  252125 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:52.130418  252125 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:52.130446  252125 kubeadm.go:319] 
	I1121 14:29:52.130574  252125 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:52.130677  252125 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:52.130685  252125 kubeadm.go:319] 
	I1121 14:29:52.130797  252125 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uhuim0.7wh8hbt7v76eo7qs \
	I1121 14:29:52.130966  252125 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:52.131000  252125 kubeadm.go:319] 	--control-plane 
	I1121 14:29:52.131035  252125 kubeadm.go:319] 
	I1121 14:29:52.131212  252125 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:52.131230  252125 kubeadm.go:319] 
	I1121 14:29:52.131343  252125 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uhuim0.7wh8hbt7v76eo7qs \
	I1121 14:29:52.131485  252125 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:52.132830  252125 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:52.132967  252125 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:52.133003  252125 cni.go:84] Creating CNI manager for ""
	I1121 14:29:52.133014  252125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:52.134968  252125 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:52.136241  252125 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:52.141107  252125 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:29:52.141131  252125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:52.155585  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:52.395340  252125 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:52.395422  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.395526  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-921956 minikube.k8s.io/updated_at=2025_11_21T14_29_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=no-preload-921956 minikube.k8s.io/primary=true
	I1121 14:29:52.481012  252125 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:52.481125  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.982198  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.481748  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.981282  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.646815  255774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:50.654615  255774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:29:50.654642  255774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:50.673887  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:50.944978  255774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:50.945143  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.945309  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-376255 minikube.k8s.io/updated_at=2025_11_21T14_29_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=default-k8s-diff-port-376255 minikube.k8s.io/primary=true
	I1121 14:29:50.960009  255774 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:51.036596  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:51.537134  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.037345  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.536941  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.037592  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.536966  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.036678  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.536697  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.037499  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.536808  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.610391  255774 kubeadm.go:1114] duration metric: took 4.665295307s to wait for elevateKubeSystemPrivileges
	I1121 14:29:55.610426  255774 kubeadm.go:403] duration metric: took 15.395907943s to StartCluster
	I1121 14:29:55.610448  255774 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:55.610511  255774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:55.612071  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:55.612346  255774 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:55.612498  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:55.612612  255774 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:55.612696  255774 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-376255"
	I1121 14:29:55.612713  255774 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-376255"
	I1121 14:29:55.612745  255774 host.go:66] Checking if "default-k8s-diff-port-376255" exists ...
	I1121 14:29:55.612775  255774 config.go:182] Loaded profile config "default-k8s-diff-port-376255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:55.612835  255774 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-376255"
	I1121 14:29:55.612852  255774 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-376255"
	I1121 14:29:55.613218  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.613392  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.613476  255774 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:55.615420  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:55.641842  255774 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-376255"
	I1121 14:29:55.641893  255774 host.go:66] Checking if "default-k8s-diff-port-376255" exists ...
	I1121 14:29:55.642317  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.647007  255774 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:55.648771  255774 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:55.648807  255774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:55.648882  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:55.679690  255774 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:55.679713  255774 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:55.679780  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:55.680868  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:55.703091  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:55.713751  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:55.781953  255774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:55.795189  255774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:55.811872  255774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:55.895061  255774 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:55.896386  255774 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-376255" to be "Ready" ...
	I1121 14:29:56.162438  255774 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1121 14:29:52.672645  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:29:55.172665  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:29:54.481750  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.981303  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.481778  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.981846  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:56.481336  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:56.981822  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:57.056720  252125 kubeadm.go:1114] duration metric: took 4.66135199s to wait for elevateKubeSystemPrivileges
	I1121 14:29:57.056760  252125 kubeadm.go:403] duration metric: took 16.414821557s to StartCluster
	I1121 14:29:57.056783  252125 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:57.056866  252125 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:57.059279  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:57.059591  252125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:57.059595  252125 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:57.059668  252125 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:57.059755  252125 addons.go:70] Setting storage-provisioner=true in profile "no-preload-921956"
	I1121 14:29:57.059780  252125 addons.go:239] Setting addon storage-provisioner=true in "no-preload-921956"
	I1121 14:29:57.059783  252125 addons.go:70] Setting default-storageclass=true in profile "no-preload-921956"
	I1121 14:29:57.059810  252125 host.go:66] Checking if "no-preload-921956" exists ...
	I1121 14:29:57.059818  252125 config.go:182] Loaded profile config "no-preload-921956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:57.059810  252125 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-921956"
	I1121 14:29:57.060267  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.060366  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.061615  252125 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:57.063049  252125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:57.087511  252125 addons.go:239] Setting addon default-storageclass=true in "no-preload-921956"
	I1121 14:29:57.087574  252125 host.go:66] Checking if "no-preload-921956" exists ...
	I1121 14:29:57.088046  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.088842  252125 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:57.090553  252125 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:57.090577  252125 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:57.090634  252125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-921956
	I1121 14:29:57.113518  252125 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:57.113567  252125 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:57.113644  252125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-921956
	I1121 14:29:57.116604  252125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/no-preload-921956/id_rsa Username:docker}
	I1121 14:29:57.140626  252125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/no-preload-921956/id_rsa Username:docker}
	I1121 14:29:57.162241  252125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:57.221336  252125 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:57.237060  252125 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:57.259845  252125 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:57.393470  252125 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:57.394577  252125 node_ready.go:35] waiting up to 6m0s for node "no-preload-921956" to be "Ready" ...
	I1121 14:29:57.623024  252125 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:29:57.414885  213058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.125971322s)
	W1121 14:29:57.414929  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1121 14:29:57.414939  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:57.414952  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:57.462838  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:57.462881  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:57.526637  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:57.526671  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:57.574224  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:57.574259  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:57.624430  252125 addons.go:530] duration metric: took 564.759261ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:57.898009  252125 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-921956" context rescaled to 1 replicas
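The "rescaled to 1 replicas" lines above record minikube shrinking the coredns deployment to a single replica. A minimal client-go sketch of that operation, under the same kubeconfig-path assumption as before and not claiming to match minikube's internal implementation, looks like this:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// rescaleCoreDNS drops the coredns deployment to the requested replica count,
// which is what the kapi.go "rescaled to 1 replicas" log lines report.
func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired size, nothing to do
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	if err == nil {
		fmt.Printf("coredns rescaled to %d replicas\n", replicas)
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	if err := rescaleCoreDNS(context.Background(), kubernetes.NewForConfigOrDie(cfg), 1); err != nil {
		panic(err)
	}
}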
	I1121 14:29:56.163632  255774 addons.go:530] duration metric: took 551.031985ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:56.399602  255774 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-376255" context rescaled to 1 replicas
	W1121 14:29:57.899680  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:29:57.174208  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:29:59.672116  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:30:00.114035  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1121 14:29:59.398191  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:01.898360  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:29:59.900344  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:01.900816  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:04.400331  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:01.672252  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:30:04.171805  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:30:05.672011  249617 node_ready.go:49] node "old-k8s-version-012258" is "Ready"
	I1121 14:30:05.672046  249617 node_ready.go:38] duration metric: took 15.003519412s for node "old-k8s-version-012258" to be "Ready" ...
	I1121 14:30:05.672064  249617 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:05.672125  249617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:05.689799  249617 api_server.go:72] duration metric: took 15.377593574s to wait for apiserver process to appear ...
	I1121 14:30:05.689974  249617 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:05.690001  249617 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:30:05.696217  249617 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1121 14:30:05.697950  249617 api_server.go:141] control plane version: v1.28.0
	I1121 14:30:05.697978  249617 api_server.go:131] duration metric: took 7.994891ms to wait for apiserver health ...
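The healthz wait just above (api_server.go:253/279) simply polls https://<apiserver>:8443/healthz until it answers 200 "ok". A simplified, self-contained sketch of that loop follows; minikube trusts the cluster CA, whereas the sketch skips TLS verification purely to stay short, and the endpoint and timeout values are taken from this run rather than from any fixed policy.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz issues GET <base>/healthz every two seconds until it returns
// 200 or the context expires -- a simplified version of the wait in the log.
func pollHealthz(ctx context.Context, base string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(base + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s/healthz returned %d: %s\n", base, resp.StatusCode, body)
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := pollHealthz(ctx, "https://192.168.94.2:8443"); err != nil {
		fmt.Println("apiserver never became healthy:", err)
	}
}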
	I1121 14:30:05.697990  249617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:05.702726  249617 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:05.702769  249617 system_pods.go:61] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.702778  249617 system_pods.go:61] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.702785  249617 system_pods.go:61] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.702796  249617 system_pods.go:61] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.702808  249617 system_pods.go:61] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.702818  249617 system_pods.go:61] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.702822  249617 system_pods.go:61] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.702829  249617 system_pods.go:61] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.702837  249617 system_pods.go:74] duration metric: took 4.84094ms to wait for pod list to return data ...
	I1121 14:30:05.702852  249617 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:05.705127  249617 default_sa.go:45] found service account: "default"
	I1121 14:30:05.705151  249617 default_sa.go:55] duration metric: took 2.290103ms for default service account to be created ...
	I1121 14:30:05.705161  249617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:05.710235  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:05.710318  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.710330  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.710337  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.710367  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.710374  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.710380  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.710385  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.710404  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.710597  249617 retry.go:31] will retry after 257.065607ms: missing components: kube-dns
	I1121 14:30:05.972608  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:05.972648  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.972657  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.972665  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.972676  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.972682  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.972687  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.972692  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.972707  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.972726  249617 retry.go:31] will retry after 339.692313ms: missing components: kube-dns
	I1121 14:30:06.317124  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:06.317155  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Running
	I1121 14:30:06.317160  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:06.317163  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:06.317167  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:06.317171  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:06.317175  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:06.317178  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:06.317181  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Running
	I1121 14:30:06.317188  249617 system_pods.go:126] duration metric: took 612.020803ms to wait for k8s-apps to be running ...
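The "will retry after ...: missing components: kube-dns" iterations above are a list-and-retry loop over kube-system pods that ends once every pod is Running. The following client-go sketch shows the same idea; the kubeconfig path and the fixed retry interval are assumptions, and minikube's real loop additionally tracks named components rather than raw pod phases.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForKubeSystem lists kube-system pods and retries until every one of
// them reports phase Running, echoing the retry loop visible in the log.
func waitForKubeSystem(ctx context.Context, cs kubernetes.Interface) error {
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		var missing []string
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				missing = append(missing, p.Name)
			}
		}
		if len(missing) == 0 {
			fmt.Printf("%d kube-system pods found, all running\n", len(pods.Items))
			return nil
		}
		fmt.Println("will retry, still pending:", missing)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(300 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForKubeSystem(ctx, kubernetes.NewForConfigOrDie(cfg)); err != nil {
		panic(err)
	}
}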
	I1121 14:30:06.317194  249617 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:06.317250  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:06.332295  249617 system_svc.go:56] duration metric: took 15.088564ms WaitForService to wait for kubelet
	I1121 14:30:06.332331  249617 kubeadm.go:587] duration metric: took 16.020134285s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:06.332357  249617 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:06.338044  249617 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:06.338071  249617 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:06.338084  249617 node_conditions.go:105] duration metric: took 5.72136ms to run NodePressure ...
	I1121 14:30:06.338096  249617 start.go:242] waiting for startup goroutines ...
	I1121 14:30:06.338102  249617 start.go:247] waiting for cluster config update ...
	I1121 14:30:06.338113  249617 start.go:256] writing updated cluster config ...
	I1121 14:30:06.338382  249617 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:06.342534  249617 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:06.347323  249617 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vst4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.352062  249617 pod_ready.go:94] pod "coredns-5dd5756b68-vst4c" is "Ready"
	I1121 14:30:06.352087  249617 pod_ready.go:86] duration metric: took 4.697932ms for pod "coredns-5dd5756b68-vst4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.354946  249617 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.359326  249617 pod_ready.go:94] pod "etcd-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.359355  249617 pod_ready.go:86] duration metric: took 4.388182ms for pod "etcd-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.362007  249617 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.366060  249617 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.366081  249617 pod_ready.go:86] duration metric: took 4.051984ms for pod "kube-apiserver-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.368789  249617 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.746914  249617 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.746952  249617 pod_ready.go:86] duration metric: took 378.141903ms for pod "kube-controller-manager-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.947790  249617 pod_ready.go:83] waiting for pod "kube-proxy-wsp2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.347266  249617 pod_ready.go:94] pod "kube-proxy-wsp2w" is "Ready"
	I1121 14:30:07.347291  249617 pod_ready.go:86] duration metric: took 399.477159ms for pod "kube-proxy-wsp2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.547233  249617 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.946728  249617 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-012258" is "Ready"
	I1121 14:30:07.946756  249617 pod_ready.go:86] duration metric: took 399.500525ms for pod "kube-scheduler-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.946772  249617 pod_ready.go:40] duration metric: took 1.604187461s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:08.009909  249617 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1121 14:30:08.014607  249617 out.go:203] 
	W1121 14:30:08.016075  249617 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1121 14:30:08.020782  249617 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1121 14:30:08.022622  249617 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-012258" cluster and "default" namespace by default
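The warning just before the "Done!" line comes from comparing kubectl 1.34.2 against the 1.28.0 cluster: a minor-version difference of 6. A tiny sketch of that arithmetic is below; the "more than one minor version" warning threshold mirrors the usual kubectl skew policy, but treating it as the exact rule minikube applies is an assumption.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of a
// client and a server release string ("1.34.2" vs "1.28.0" gives 6).
func minorSkew(client, server string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(client) - minor(server)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	skew := minorSkew("1.34.2", "1.28.0")
	fmt.Printf("minor skew: %d\n", skew)
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with this cluster")
	}
}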
	I1121 14:30:05.115052  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1121 14:30:05.115115  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:05.115188  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:05.143819  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:05.143839  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:30:05.143843  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:05.143846  213058 cri.go:89] found id: ""
	I1121 14:30:05.143853  213058 logs.go:282] 3 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:05.143912  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.148585  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.152984  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.156944  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:05.157004  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:05.185404  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:05.185430  213058 cri.go:89] found id: ""
	I1121 14:30:05.185440  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:05.185498  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.190360  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:05.190432  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:05.222964  213058 cri.go:89] found id: ""
	I1121 14:30:05.222989  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.222999  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:05.223006  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:05.223058  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:05.254414  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:05.254436  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:05.254440  213058 cri.go:89] found id: ""
	I1121 14:30:05.254447  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:05.254505  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.258766  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.262456  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:05.262524  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:05.288454  213058 cri.go:89] found id: ""
	I1121 14:30:05.288486  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.288496  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:05.288505  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:05.288598  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:05.317814  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:05.317841  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:05.317847  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:05.317851  213058 cri.go:89] found id: ""
	I1121 14:30:05.317861  213058 logs.go:282] 3 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:05.317930  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.322506  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.326684  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.330828  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:05.330957  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:05.360073  213058 cri.go:89] found id: ""
	I1121 14:30:05.360098  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.360107  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:05.360116  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:05.360171  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:05.388524  213058 cri.go:89] found id: ""
	I1121 14:30:05.388561  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.388573  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:05.388587  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:05.388602  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:05.427247  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:05.427279  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:05.517583  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:05.517615  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:05.556205  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:30:05.556238  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:30:05.601637  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:05.601692  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:05.642125  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:05.642167  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:05.707252  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:05.707295  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:05.747947  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:05.747990  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:05.767646  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:05.767678  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:04.398534  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:06.897181  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:08.897492  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:06.900285  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	I1121 14:30:07.400113  255774 node_ready.go:49] node "default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:07.400148  255774 node_ready.go:38] duration metric: took 11.503726167s for node "default-k8s-diff-port-376255" to be "Ready" ...
	I1121 14:30:07.400166  255774 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:07.400227  255774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:07.416428  255774 api_server.go:72] duration metric: took 11.804040955s to wait for apiserver process to appear ...
	I1121 14:30:07.416462  255774 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:07.416487  255774 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1121 14:30:07.423355  255774 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1121 14:30:07.424441  255774 api_server.go:141] control plane version: v1.34.1
	I1121 14:30:07.424471  255774 api_server.go:131] duration metric: took 8.001103ms to wait for apiserver health ...
	I1121 14:30:07.424480  255774 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:07.428816  255774 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:07.428856  255774 system_pods.go:61] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.428866  255774 system_pods.go:61] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.428874  255774 system_pods.go:61] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.428880  255774 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.428886  255774 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.428891  255774 system_pods.go:61] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.428899  255774 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.428912  255774 system_pods.go:61] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.428921  255774 system_pods.go:74] duration metric: took 4.433771ms to wait for pod list to return data ...
	I1121 14:30:07.428932  255774 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:07.431771  255774 default_sa.go:45] found service account: "default"
	I1121 14:30:07.431794  255774 default_sa.go:55] duration metric: took 2.856811ms for default service account to be created ...
	I1121 14:30:07.431804  255774 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:07.435787  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.435816  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.435821  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.435826  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.435830  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.435833  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.435836  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.435841  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.435846  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.435871  255774 retry.go:31] will retry after 217.060579ms: missing components: kube-dns
	I1121 14:30:07.656900  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.656930  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.656937  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.656945  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.656950  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.656955  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.656959  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.656964  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.656970  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.656989  255774 retry.go:31] will retry after 330.648304ms: missing components: kube-dns
	I1121 14:30:07.995514  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.995612  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.995626  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.995636  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.995642  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.995653  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.995659  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.995664  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.995683  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.995713  255774 retry.go:31] will retry after 466.383408ms: missing components: kube-dns
	I1121 14:30:08.466385  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:08.466414  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Running
	I1121 14:30:08.466419  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:08.466423  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:08.466427  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:08.466430  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:08.466435  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:08.466438  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:08.466441  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Running
	I1121 14:30:08.466448  255774 system_pods.go:126] duration metric: took 1.034639333s to wait for k8s-apps to be running ...
	I1121 14:30:08.466454  255774 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:08.466495  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:08.480058  255774 system_svc.go:56] duration metric: took 13.59071ms WaitForService to wait for kubelet
	I1121 14:30:08.480087  255774 kubeadm.go:587] duration metric: took 12.867708638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:08.480104  255774 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:08.483054  255774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:08.483077  255774 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:08.483089  255774 node_conditions.go:105] duration metric: took 2.980591ms to run NodePressure ...
	I1121 14:30:08.483101  255774 start.go:242] waiting for startup goroutines ...
	I1121 14:30:08.483107  255774 start.go:247] waiting for cluster config update ...
	I1121 14:30:08.483116  255774 start.go:256] writing updated cluster config ...
	I1121 14:30:08.483378  255774 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:08.487457  255774 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:08.490869  255774 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.495613  255774 pod_ready.go:94] pod "coredns-66bc5c9577-fr27b" is "Ready"
	I1121 14:30:08.495638  255774 pod_ready.go:86] duration metric: took 4.745112ms for pod "coredns-66bc5c9577-fr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.498070  255774 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.502098  255774 pod_ready.go:94] pod "etcd-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.502122  255774 pod_ready.go:86] duration metric: took 4.029361ms for pod "etcd-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.504276  255774 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.508229  255774 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.508250  255774 pod_ready.go:86] duration metric: took 3.957821ms for pod "kube-apiserver-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.510387  255774 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.891344  255774 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.891369  255774 pod_ready.go:86] duration metric: took 380.959206ms for pod "kube-controller-manager-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.091636  255774 pod_ready.go:83] waiting for pod "kube-proxy-hdplf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.492078  255774 pod_ready.go:94] pod "kube-proxy-hdplf" is "Ready"
	I1121 14:30:09.492108  255774 pod_ready.go:86] duration metric: took 400.444722ms for pod "kube-proxy-hdplf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.693278  255774 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:10.092105  255774 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:10.092133  255774 pod_ready.go:86] duration metric: took 398.824976ms for pod "kube-scheduler-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:10.092146  255774 pod_ready.go:40] duration metric: took 1.604655578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:10.138628  255774 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:30:10.140593  255774 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-376255" cluster and "default" namespace by default
	I1121 14:30:08.754284  213058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (2.986586875s)
	W1121 14:30:08.754342  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60538->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60538->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1121 14:30:08.754352  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:08.754366  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:08.789119  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:08.789149  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:08.842933  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:08.842974  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:08.880878  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:08.880919  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:08.910920  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:30:08.910953  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.440020  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:11.440496  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:11.440556  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:11.440601  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:11.472645  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:11.472669  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:11.472674  213058 cri.go:89] found id: ""
	I1121 14:30:11.472683  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:11.472748  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.478061  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.482946  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:11.483034  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:11.517693  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:11.517722  213058 cri.go:89] found id: ""
	I1121 14:30:11.517732  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:11.517797  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.523621  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:11.523699  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:11.559155  213058 cri.go:89] found id: ""
	I1121 14:30:11.559194  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.559204  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:11.559212  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:11.559271  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:11.595093  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:11.595127  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:11.595133  213058 cri.go:89] found id: ""
	I1121 14:30:11.595143  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:11.595194  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.600085  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.604973  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:11.605048  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:11.639606  213058 cri.go:89] found id: ""
	I1121 14:30:11.639636  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.639647  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:11.639653  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:11.639713  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:11.684373  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:11.684400  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.684405  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:11.684410  213058 cri.go:89] found id: ""
	I1121 14:30:11.684421  213058 logs.go:282] 3 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:11.684482  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.689732  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.695253  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.701315  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:11.701388  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:11.732802  213058 cri.go:89] found id: ""
	I1121 14:30:11.732831  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.732841  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:11.732848  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:11.732907  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:11.761686  213058 cri.go:89] found id: ""
	I1121 14:30:11.761717  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.761729  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:11.761741  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:11.761756  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:11.816634  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:11.816670  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:11.846024  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:30:11.846055  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.876932  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:11.876964  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:11.912984  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:11.913018  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:11.965381  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:11.965423  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:11.997477  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:11.997509  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:12.011497  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:12.011524  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:12.071024  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:12.071049  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:12.071065  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:12.106865  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:12.106898  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:12.141245  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:12.141276  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:12.176551  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:12.176600  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:12.268742  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:12.268780  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
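Each "Gathering logs for ..." round above is the same two-step pattern: list the component's container IDs with `crictl ps -a --quiet --name=...`, then dump the last 400 lines of each with `crictl logs --tail 400 <id>`. The sketch below replays that pattern with os/exec; it assumes it runs directly on the node, whereas minikube drives the identical commands through the SSH runner seen earlier in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs lists container IDs for a component with `crictl ps`
// and then dumps the last 400 lines of each container with `crictl logs`,
// matching the two-step pattern in the log.
func gatherComponentLogs(component string) error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return fmt.Errorf("listing %s containers: %w", component, err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Printf("=== %s [%s] ===\n", component, id)
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("logs for %s: %w", id, err)
		}
		fmt.Print(string(logs))
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Println(err)
		}
	}
}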
	W1121 14:30:10.897620  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	I1121 14:30:11.398100  252125 node_ready.go:49] node "no-preload-921956" is "Ready"
	I1121 14:30:11.398128  252125 node_ready.go:38] duration metric: took 14.003530083s for node "no-preload-921956" to be "Ready" ...
	I1121 14:30:11.398142  252125 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:11.398195  252125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:11.412043  252125 api_server.go:72] duration metric: took 14.35241025s to wait for apiserver process to appear ...
	I1121 14:30:11.412070  252125 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:11.412087  252125 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1121 14:30:11.417254  252125 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1121 14:30:11.418517  252125 api_server.go:141] control plane version: v1.34.1
	I1121 14:30:11.418570  252125 api_server.go:131] duration metric: took 6.492303ms to wait for apiserver health ...
	I1121 14:30:11.418581  252125 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:11.421927  252125 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:11.422024  252125 system_pods.go:61] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.422034  252125 system_pods.go:61] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.422047  252125 system_pods.go:61] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.422059  252125 system_pods.go:61] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.422069  252125 system_pods.go:61] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.422073  252125 system_pods.go:61] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.422077  252125 system_pods.go:61] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.422082  252125 system_pods.go:61] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.422094  252125 system_pods.go:74] duration metric: took 3.505153ms to wait for pod list to return data ...
	I1121 14:30:11.422109  252125 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:11.424685  252125 default_sa.go:45] found service account: "default"
	I1121 14:30:11.424710  252125 default_sa.go:55] duration metric: took 2.591611ms for default service account to be created ...
	I1121 14:30:11.424722  252125 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:11.427627  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.427680  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.427689  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.427703  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.427713  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.427721  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.427726  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.427731  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.427737  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.427768  252125 retry.go:31] will retry after 234.428318ms: missing components: kube-dns
	I1121 14:30:11.669788  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.669831  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.669840  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.669850  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.669858  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.669865  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.669871  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.669877  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.669893  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.669919  252125 retry.go:31] will retry after 250.085803ms: missing components: kube-dns
	I1121 14:30:11.924517  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.924602  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.924614  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.924627  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.924633  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.924642  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.924647  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.924653  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.924661  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.924682  252125 retry.go:31] will retry after 441.862758ms: missing components: kube-dns
	I1121 14:30:12.371065  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:12.371110  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:12.371122  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:12.371131  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:12.371136  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:12.371142  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:12.371147  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:12.371158  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:12.371170  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:12.371189  252125 retry.go:31] will retry after 502.578888ms: missing components: kube-dns
	I1121 14:30:12.879209  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:12.879243  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Running
	I1121 14:30:12.879249  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:12.879253  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:12.879258  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:12.879268  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:12.879271  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:12.879275  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:12.879278  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Running
	I1121 14:30:12.879289  252125 system_pods.go:126] duration metric: took 1.454561179s to wait for k8s-apps to be running ...
	I1121 14:30:12.879301  252125 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:12.879351  252125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:12.894061  252125 system_svc.go:56] duration metric: took 14.74714ms WaitForService to wait for kubelet
	I1121 14:30:12.894092  252125 kubeadm.go:587] duration metric: took 15.834465857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:12.894115  252125 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:12.897599  252125 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:12.897630  252125 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:12.897641  252125 node_conditions.go:105] duration metric: took 3.520753ms to run NodePressure ...
	I1121 14:30:12.897652  252125 start.go:242] waiting for startup goroutines ...
	I1121 14:30:12.897659  252125 start.go:247] waiting for cluster config update ...
	I1121 14:30:12.897669  252125 start.go:256] writing updated cluster config ...
	I1121 14:30:12.897983  252125 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:12.902897  252125 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:12.906562  252125 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s4rzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.912263  252125 pod_ready.go:94] pod "coredns-66bc5c9577-s4rzb" is "Ready"
	I1121 14:30:12.912286  252125 pod_ready.go:86] duration metric: took 5.702456ms for pod "coredns-66bc5c9577-s4rzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.915190  252125 pod_ready.go:83] waiting for pod "etcd-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.919870  252125 pod_ready.go:94] pod "etcd-no-preload-921956" is "Ready"
	I1121 14:30:12.919896  252125 pod_ready.go:86] duration metric: took 4.68423ms for pod "etcd-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.921926  252125 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.925984  252125 pod_ready.go:94] pod "kube-apiserver-no-preload-921956" is "Ready"
	I1121 14:30:12.926012  252125 pod_ready.go:86] duration metric: took 4.065762ms for pod "kube-apiserver-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.928283  252125 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.307608  252125 pod_ready.go:94] pod "kube-controller-manager-no-preload-921956" is "Ready"
	I1121 14:30:13.307639  252125 pod_ready.go:86] duration metric: took 379.335151ms for pod "kube-controller-manager-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.508229  252125 pod_ready.go:83] waiting for pod "kube-proxy-wmx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.907070  252125 pod_ready.go:94] pod "kube-proxy-wmx7z" is "Ready"
	I1121 14:30:13.907101  252125 pod_ready.go:86] duration metric: took 398.843128ms for pod "kube-proxy-wmx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.108040  252125 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.507264  252125 pod_ready.go:94] pod "kube-scheduler-no-preload-921956" is "Ready"
	I1121 14:30:14.507293  252125 pod_ready.go:86] duration metric: took 399.219492ms for pod "kube-scheduler-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.507307  252125 pod_ready.go:40] duration metric: took 1.604362709s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:14.554506  252125 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:30:14.556366  252125 out.go:179] * Done! kubectl is now configured to use "no-preload-921956" cluster and "default" namespace by default
	I1121 14:30:14.802507  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:14.803048  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:14.803100  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:14.803156  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:14.832438  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:14.832464  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:14.832469  213058 cri.go:89] found id: ""
	I1121 14:30:14.832479  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:14.832560  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.836869  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.840970  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:14.841027  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:14.869276  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:14.869297  213058 cri.go:89] found id: ""
	I1121 14:30:14.869306  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:14.869364  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.873530  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:14.873616  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:14.902293  213058 cri.go:89] found id: ""
	I1121 14:30:14.902325  213058 logs.go:282] 0 containers: []
	W1121 14:30:14.902336  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:14.902343  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:14.902396  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:14.931422  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:14.931444  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:14.931448  213058 cri.go:89] found id: ""
	I1121 14:30:14.931455  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:14.931507  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.936188  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.940673  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:14.940742  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:14.969277  213058 cri.go:89] found id: ""
	I1121 14:30:14.969308  213058 logs.go:282] 0 containers: []
	W1121 14:30:14.969320  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:14.969328  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:14.969386  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:14.999162  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:14.999190  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:14.999195  213058 cri.go:89] found id: ""
	I1121 14:30:14.999209  213058 logs.go:282] 2 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:14.999275  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:15.003627  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:15.008044  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:15.008149  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:15.036025  213058 cri.go:89] found id: ""
	I1121 14:30:15.036050  213058 logs.go:282] 0 containers: []
	W1121 14:30:15.036061  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:15.036069  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:15.036123  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:15.064814  213058 cri.go:89] found id: ""
	I1121 14:30:15.064840  213058 logs.go:282] 0 containers: []
	W1121 14:30:15.064851  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:15.064863  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:15.064877  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:15.105369  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:15.105412  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:15.145479  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:15.145521  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:15.186460  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:15.186498  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:15.233156  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:15.233196  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:15.328776  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:15.328824  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:15.343510  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:15.343556  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:15.375919  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:15.375959  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:15.412267  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:15.412310  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:15.467388  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:15.467422  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:15.495400  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:15.495451  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:15.527880  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:15.527906  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:15.589380  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:18.090626  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:18.091055  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:18.091106  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:18.091154  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:18.119750  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:18.119777  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:18.119781  213058 cri.go:89] found id: ""
	I1121 14:30:18.119788  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:18.119846  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.124441  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.128481  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:18.128574  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:18.155968  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:18.155990  213058 cri.go:89] found id: ""
	I1121 14:30:18.156000  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:18.156056  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.160457  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:18.160529  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:18.191869  213058 cri.go:89] found id: ""
	I1121 14:30:18.191899  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.191909  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:18.191916  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:18.191990  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:18.222614  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:18.222639  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:18.222644  213058 cri.go:89] found id: ""
	I1121 14:30:18.222653  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:18.222710  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.227248  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.231976  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:18.232054  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:18.261651  213058 cri.go:89] found id: ""
	I1121 14:30:18.261686  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.261696  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:18.261703  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:18.261756  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:18.293248  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:18.293277  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:18.293283  213058 cri.go:89] found id: ""
	I1121 14:30:18.293291  213058 logs.go:282] 2 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:18.293360  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.297988  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.302375  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:18.302444  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:18.331900  213058 cri.go:89] found id: ""
	I1121 14:30:18.331976  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.331989  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:18.331997  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:18.332053  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:18.362314  213058 cri.go:89] found id: ""
	I1121 14:30:18.362341  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.362351  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:18.362363  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:18.362378  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:18.401362  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:18.401403  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:18.453554  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:18.453597  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:18.470719  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:18.470750  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:18.535220  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:18.535241  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:18.535255  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:18.572460  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:18.572490  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
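	
	Editor's note on the log-gathering pass above: the harness collects component logs with a fixed set of node-side commands. As an illustrative sketch only (every command below is copied from the log lines above; the crictl path, the kubectl binary path, the kubeconfig location, and the example container id 56e8102371126ace... are specific to this run's minikube node and may differ elsewhere), the same diagnostics can be reproduced manually over SSH to the node:
	
	  # list containers for one component (repeat with --name=etcd, kube-scheduler, ...)
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # dump the last 400 log lines of one container (id taken from the listing above)
	  sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324
	  # runtime and kubelet journals
	  sudo journalctl -u containerd -n 400
	  sudo journalctl -u kubelet -n 400
	  # kernel warnings and errors
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  # node view via the in-cluster kubectl and kubeconfig
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	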
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	db852415ef1dc       56cc512116c8f       9 seconds ago       Running             busybox                   0                   e54fe86273872       busybox                                                default
	503bfdf03cf92       52546a367cc9e       15 seconds ago      Running             coredns                   0                   90307d29a5634       coredns-66bc5c9577-fr27b                               kube-system
	72566b31204f1       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   c822919b946f5       storage-provisioner                                    kube-system
	5ae0b8683c837       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   c07d35ce51347       kindnet-cdzd4                                          kube-system
	482b9bb196494       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   793cf2292079a       kube-proxy-hdplf                                       kube-system
	d4b4acbfed098       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   5996271748c58       kube-controller-manager-default-k8s-diff-port-376255   kube-system
	0167abb93fad5       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   5677c92bba15d       etcd-default-k8s-diff-port-376255                      kube-system
	049e7f927287c       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   e6e8ff5f9a760       kube-scheduler-default-k8s-diff-port-376255            kube-system
	d3f63cf7e2378       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   8dfe7b46f28da       kube-apiserver-default-k8s-diff-port-376255            kube-system
	
	
	==> containerd <==
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.417258305Z" level=info msg="CreateContainer within sandbox \"c822919b946f5084228dedf9bcff448780d4c1d0f9bb88544bec381ec181e4b4\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"72566b31204f17c69820e87dc138f52467a4fe88b660933bd2d6fbab49f14b83\""
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.418098732Z" level=info msg="StartContainer for \"72566b31204f17c69820e87dc138f52467a4fe88b660933bd2d6fbab49f14b83\""
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.419052809Z" level=info msg="connecting to shim 72566b31204f17c69820e87dc138f52467a4fe88b660933bd2d6fbab49f14b83" address="unix:///run/containerd/s/ea06ee1969c69f41a158dafd695d145ae6a2522a693ddcad561ea53000bcae67" protocol=ttrpc version=3
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.422837003Z" level=info msg="Container 503bfdf03cf92076c47a1396f31d08fee2bfe4b847e852055e31dd2cb1208786: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.432317405Z" level=info msg="CreateContainer within sandbox \"90307d29a563415a13a6efc9e6611bdfa8459eab6a4193ce269e2c075d2e77c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"503bfdf03cf92076c47a1396f31d08fee2bfe4b847e852055e31dd2cb1208786\""
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.433885617Z" level=info msg="StartContainer for \"503bfdf03cf92076c47a1396f31d08fee2bfe4b847e852055e31dd2cb1208786\""
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.435002526Z" level=info msg="connecting to shim 503bfdf03cf92076c47a1396f31d08fee2bfe4b847e852055e31dd2cb1208786" address="unix:///run/containerd/s/e4364fe920f744c3ba1c981b59ff648e4b672f006c8b2ce6a982c700c058a032" protocol=ttrpc version=3
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.475458464Z" level=info msg="StartContainer for \"72566b31204f17c69820e87dc138f52467a4fe88b660933bd2d6fbab49f14b83\" returns successfully"
	Nov 21 14:30:07 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:07.483417325Z" level=info msg="StartContainer for \"503bfdf03cf92076c47a1396f31d08fee2bfe4b847e852055e31dd2cb1208786\" returns successfully"
	Nov 21 14:30:10 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:10.625630528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e6d82a47-2d60-4b9a-8e47-37d867b92b64,Namespace:default,Attempt:0,}"
	Nov 21 14:30:10 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:10.665247296Z" level=info msg="connecting to shim e54fe862738726b4a20f4534960ca579dd1eebd8f039b9e8eb7a64ec18185c30" address="unix:///run/containerd/s/7683fa59a762f604c3ba440e18606922538b08060da5f003cc83fc10a8b41128" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:30:10 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:10.735678986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e6d82a47-2d60-4b9a-8e47-37d867b92b64,Namespace:default,Attempt:0,} returns sandbox id \"e54fe862738726b4a20f4534960ca579dd1eebd8f039b9e8eb7a64ec18185c30\""
	Nov 21 14:30:10 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:10.737883752Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.002632475Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.003520464Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396645"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.004959871Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.007088408Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.007589904Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.269662828s"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.007636702Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.012526795Z" level=info msg="CreateContainer within sandbox \"e54fe862738726b4a20f4534960ca579dd1eebd8f039b9e8eb7a64ec18185c30\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.020535489Z" level=info msg="Container db852415ef1dcbf853ef93f70d23ccd5ec94be8704c247fe952702868e9a6a75: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.029170044Z" level=info msg="CreateContainer within sandbox \"e54fe862738726b4a20f4534960ca579dd1eebd8f039b9e8eb7a64ec18185c30\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"db852415ef1dcbf853ef93f70d23ccd5ec94be8704c247fe952702868e9a6a75\""
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.029914495Z" level=info msg="StartContainer for \"db852415ef1dcbf853ef93f70d23ccd5ec94be8704c247fe952702868e9a6a75\""
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.030935590Z" level=info msg="connecting to shim db852415ef1dcbf853ef93f70d23ccd5ec94be8704c247fe952702868e9a6a75" address="unix:///run/containerd/s/7683fa59a762f604c3ba440e18606922538b08060da5f003cc83fc10a8b41128" protocol=ttrpc version=3
	Nov 21 14:30:13 default-k8s-diff-port-376255 containerd[660]: time="2025-11-21T14:30:13.089294399Z" level=info msg="StartContainer for \"db852415ef1dcbf853ef93f70d23ccd5ec94be8704c247fe952702868e9a6a75\" returns successfully"
	
	
	==> coredns [503bfdf03cf92076c47a1396f31d08fee2bfe4b847e852055e31dd2cb1208786] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34200 - 39323 "HINFO IN 5503388865233133299.8183971682332353198. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.096214955s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-376255
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-376255
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=default-k8s-diff-port-376255
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_29_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:29:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-376255
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:30:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:30:21 +0000   Fri, 21 Nov 2025 14:29:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:30:21 +0000   Fri, 21 Nov 2025 14:29:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:30:21 +0000   Fri, 21 Nov 2025 14:29:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:30:21 +0000   Fri, 21 Nov 2025 14:30:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-376255
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                36196da5-e221-443f-ae48-9567a40a96a8
	  Boot ID:                    f900700b-0668-4d24-87ff-85e15fbda365
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-fr27b                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-default-k8s-diff-port-376255                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-cdzd4                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-default-k8s-diff-port-376255             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-376255    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-hdplf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-default-k8s-diff-port-376255             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node default-k8s-diff-port-376255 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node default-k8s-diff-port-376255 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x7 over 39s)  kubelet          Node default-k8s-diff-port-376255 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s                kubelet          Node default-k8s-diff-port-376255 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet          Node default-k8s-diff-port-376255 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet          Node default-k8s-diff-port-376255 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node default-k8s-diff-port-376255 event: Registered Node default-k8s-diff-port-376255 in Controller
	  Normal  NodeReady                17s                kubelet          Node default-k8s-diff-port-376255 status is now: NodeReady
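	
	The node summary above is produced by the in-cluster kubectl invocation shown in the log-gathering pass earlier. As a hedged example (assuming the host's kubectl context follows minikube's usual profile-name convention for this profile), the same view can be pulled from the host with:
	
	  kubectl --context default-k8s-diff-port-376255 describe node default-k8s-diff-port-376255
	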
	
	
	==> dmesg <==
	[Nov21 13:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001887] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086016] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.440508] i8042: Warning: Keylock active
	[  +0.011202] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.526419] block sda: the capability attribute has been deprecated.
	[  +0.095215] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027093] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.485024] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [0167abb93fad5a96138057402ea72b2bbbac6460847560456f81c3e61a226b4f] <==
	{"level":"warn","ts":"2025-11-21T14:29:46.391480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.401325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.445535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.457263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.467962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.479670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.491145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.500690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.511596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.529759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.541910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.553246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.567893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.576669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.586761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.597014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.607480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.619647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.628355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.648906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.658906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.678319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.689570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.702820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:46.796116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55500","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:30:23 up  1:12,  0 user,  load average: 4.09, 3.08, 1.94
	Linux default-k8s-diff-port-376255 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5ae0b8683c8370d5c74a38ec1a8996128b935a4e574cd9f20d9213a154813db9] <==
	I1121 14:29:56.575119       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:29:56.575390       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:29:56.575585       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:29:56.575602       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:29:56.575621       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:29:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:29:56.873269       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:29:56.873300       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:29:56.873314       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:29:56.873899       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:29:57.174789       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:29:57.174824       1 metrics.go:72] Registering metrics
	I1121 14:29:57.174874       1 controller.go:711] "Syncing nftables rules"
	I1121 14:30:06.876368       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:30:06.876471       1 main.go:301] handling current node
	I1121 14:30:16.874272       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:30:16.874308       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d3f63cf7e2378b1cd63984e31c6b646308b750ea8cc070ff57b3cee65a92c4db] <==
	I1121 14:29:47.489457       1 aggregator.go:171] initial CRD sync complete...
	I1121 14:29:47.489469       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:29:47.489484       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:29:47.489491       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:29:47.494595       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:29:47.503871       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:47.524079       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:29:48.386698       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:29:48.390984       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:29:48.391006       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:29:49.045280       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:29:49.087410       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:29:49.187749       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:29:49.193901       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:29:49.195067       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:29:49.199941       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:29:49.402735       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:29:50.042816       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:29:50.055664       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:29:50.067332       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:29:54.607808       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:54.612744       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:55.106555       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:29:55.455001       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1121 14:30:19.423999       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:53806: use of closed network connection
	
	
	==> kube-controller-manager [d4b4acbfed0989aceacf5589cec62c91cea975b67f5a3ae6feb60ef411e8095e] <==
	I1121 14:29:54.400580       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:29:54.401694       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:29:54.401750       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:29:54.402151       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:29:54.402224       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:29:54.402283       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:29:54.402677       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:29:54.402691       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:29:54.403205       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:29:54.402823       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:29:54.403570       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 14:29:54.404715       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:29:54.408012       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:29:54.410310       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:29:54.410378       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:29:54.420705       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 14:29:54.420790       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:29:54.420955       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:29:54.420969       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:29:54.420987       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:29:54.428224       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:29:54.430662       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:29:54.431597       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-376255" podCIDRs=["10.244.0.0/24"]
	I1121 14:29:54.434636       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:30:09.349487       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [482b9bb19649402137bebb046dcd7e73f5411dcc7697d3a5b2a9fffd9e7ccf16] <==
	I1121 14:29:56.087038       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:29:56.172292       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:29:56.272394       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:29:56.272432       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:29:56.272615       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:29:56.297614       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:29:56.297678       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:29:56.303209       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:29:56.303624       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:29:56.303656       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:29:56.304850       1 config.go:200] "Starting service config controller"
	I1121 14:29:56.304884       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:29:56.304887       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:29:56.304923       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:29:56.304926       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:29:56.304945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:29:56.304970       1 config.go:309] "Starting node config controller"
	I1121 14:29:56.304976       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:29:56.405272       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:29:56.405315       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:29:56.405323       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:29:56.405341       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [049e7f927287c0eda41eb968ee81714a27b377f233379aa501e22da2bc6fb72e] <==
	E1121 14:29:47.505907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:29:47.505987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:29:47.506074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:29:47.506135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:29:47.506342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:29:47.506415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:29:47.506461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:29:47.506574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:29:47.507660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:29:47.508397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:29:47.510084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:29:48.328160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:29:48.332628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:29:48.426401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:29:48.428709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:29:48.532292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:29:48.552036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:29:48.554522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:29:48.702784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:29:48.717485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:29:48.770253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 14:29:48.790559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:29:48.798155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:29:48.802861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1121 14:29:50.698431       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:29:50 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:50.945498    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-376255" podStartSLOduration=1.945482685 podStartE2EDuration="1.945482685s" podCreationTimestamp="2025-11-21 14:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:50.945013076 +0000 UTC m=+1.135610121" watchObservedRunningTime="2025-11-21 14:29:50.945482685 +0000 UTC m=+1.136079730"
	Nov 21 14:29:50 default-k8s-diff-port-376255 kubelet[1434]: E1121 14:29:50.953777    1434 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-376255\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-376255"
	Nov 21 14:29:50 default-k8s-diff-port-376255 kubelet[1434]: E1121 14:29:50.954123    1434 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-376255\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-376255"
	Nov 21 14:29:50 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:50.964987    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-376255" podStartSLOduration=0.96496409 podStartE2EDuration="964.96409ms" podCreationTimestamp="2025-11-21 14:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:50.963604357 +0000 UTC m=+1.154201384" watchObservedRunningTime="2025-11-21 14:29:50.96496409 +0000 UTC m=+1.155561135"
	Nov 21 14:29:54 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:54.478889    1434 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:29:54 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:54.479768    1434 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542417    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz9xw\" (UniqueName: \"kubernetes.io/projected/f4b8f54c-361f-4748-9f31-92ffb753f404-kube-api-access-fz9xw\") pod \"kube-proxy-hdplf\" (UID: \"f4b8f54c-361f-4748-9f31-92ffb753f404\") " pod="kube-system/kube-proxy-hdplf"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542480    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f954f962-f79a-49e5-8b79-5fbd3c544ffc-cni-cfg\") pod \"kindnet-cdzd4\" (UID: \"f954f962-f79a-49e5-8b79-5fbd3c544ffc\") " pod="kube-system/kindnet-cdzd4"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542509    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f954f962-f79a-49e5-8b79-5fbd3c544ffc-lib-modules\") pod \"kindnet-cdzd4\" (UID: \"f954f962-f79a-49e5-8b79-5fbd3c544ffc\") " pod="kube-system/kindnet-cdzd4"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542534    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qlx6\" (UniqueName: \"kubernetes.io/projected/f954f962-f79a-49e5-8b79-5fbd3c544ffc-kube-api-access-5qlx6\") pod \"kindnet-cdzd4\" (UID: \"f954f962-f79a-49e5-8b79-5fbd3c544ffc\") " pod="kube-system/kindnet-cdzd4"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542593    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b8f54c-361f-4748-9f31-92ffb753f404-xtables-lock\") pod \"kube-proxy-hdplf\" (UID: \"f4b8f54c-361f-4748-9f31-92ffb753f404\") " pod="kube-system/kube-proxy-hdplf"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542609    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f954f962-f79a-49e5-8b79-5fbd3c544ffc-xtables-lock\") pod \"kindnet-cdzd4\" (UID: \"f954f962-f79a-49e5-8b79-5fbd3c544ffc\") " pod="kube-system/kindnet-cdzd4"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542628    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4b8f54c-361f-4748-9f31-92ffb753f404-kube-proxy\") pod \"kube-proxy-hdplf\" (UID: \"f4b8f54c-361f-4748-9f31-92ffb753f404\") " pod="kube-system/kube-proxy-hdplf"
	Nov 21 14:29:55 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:55.542652    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b8f54c-361f-4748-9f31-92ffb753f404-lib-modules\") pod \"kube-proxy-hdplf\" (UID: \"f4b8f54c-361f-4748-9f31-92ffb753f404\") " pod="kube-system/kube-proxy-hdplf"
	Nov 21 14:29:56 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:56.980508    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hdplf" podStartSLOduration=1.980488013 podStartE2EDuration="1.980488013s" podCreationTimestamp="2025-11-21 14:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:56.980288351 +0000 UTC m=+7.170885396" watchObservedRunningTime="2025-11-21 14:29:56.980488013 +0000 UTC m=+7.171085057"
	Nov 21 14:29:56 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:29:56.980681    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cdzd4" podStartSLOduration=1.980672067 podStartE2EDuration="1.980672067s" podCreationTimestamp="2025-11-21 14:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:56.968815157 +0000 UTC m=+7.159412203" watchObservedRunningTime="2025-11-21 14:29:56.980672067 +0000 UTC m=+7.171269111"
	Nov 21 14:30:06 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:06.960724    1434 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:30:07 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:07.025858    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aecd7b98-657f-464e-9860-d060714bbc5d-config-volume\") pod \"coredns-66bc5c9577-fr27b\" (UID: \"aecd7b98-657f-464e-9860-d060714bbc5d\") " pod="kube-system/coredns-66bc5c9577-fr27b"
	Nov 21 14:30:07 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:07.025901    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnxkj\" (UniqueName: \"kubernetes.io/projected/4fa1d228-0310-45d2-87b6-91ce085f1f58-kube-api-access-hnxkj\") pod \"storage-provisioner\" (UID: \"4fa1d228-0310-45d2-87b6-91ce085f1f58\") " pod="kube-system/storage-provisioner"
	Nov 21 14:30:07 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:07.025941    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wlfl\" (UniqueName: \"kubernetes.io/projected/aecd7b98-657f-464e-9860-d060714bbc5d-kube-api-access-2wlfl\") pod \"coredns-66bc5c9577-fr27b\" (UID: \"aecd7b98-657f-464e-9860-d060714bbc5d\") " pod="kube-system/coredns-66bc5c9577-fr27b"
	Nov 21 14:30:07 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:07.025973    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4fa1d228-0310-45d2-87b6-91ce085f1f58-tmp\") pod \"storage-provisioner\" (UID: \"4fa1d228-0310-45d2-87b6-91ce085f1f58\") " pod="kube-system/storage-provisioner"
	Nov 21 14:30:08 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:08.024337    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.024313781 podStartE2EDuration="12.024313781s" podCreationTimestamp="2025-11-21 14:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:30:08.024165656 +0000 UTC m=+18.214762699" watchObservedRunningTime="2025-11-21 14:30:08.024313781 +0000 UTC m=+18.214910826"
	Nov 21 14:30:08 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:08.024505    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fr27b" podStartSLOduration=13.02449524 podStartE2EDuration="13.02449524s" podCreationTimestamp="2025-11-21 14:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:30:08.01196088 +0000 UTC m=+18.202557939" watchObservedRunningTime="2025-11-21 14:30:08.02449524 +0000 UTC m=+18.215092285"
	Nov 21 14:30:10 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:10.350273    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt5c7\" (UniqueName: \"kubernetes.io/projected/e6d82a47-2d60-4b9a-8e47-37d867b92b64-kube-api-access-zt5c7\") pod \"busybox\" (UID: \"e6d82a47-2d60-4b9a-8e47-37d867b92b64\") " pod="default/busybox"
	Nov 21 14:30:14 default-k8s-diff-port-376255 kubelet[1434]: I1121 14:30:14.014319    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.74306566 podStartE2EDuration="4.014298186s" podCreationTimestamp="2025-11-21 14:30:10 +0000 UTC" firstStartedPulling="2025-11-21 14:30:10.737415699 +0000 UTC m=+20.928012736" lastFinishedPulling="2025-11-21 14:30:13.008648225 +0000 UTC m=+23.199245262" observedRunningTime="2025-11-21 14:30:14.014088039 +0000 UTC m=+24.204685088" watchObservedRunningTime="2025-11-21 14:30:14.014298186 +0000 UTC m=+24.204895230"
	
	
	==> storage-provisioner [72566b31204f17c69820e87dc138f52467a4fe88b660933bd2d6fbab49f14b83] <==
	I1121 14:30:07.485611       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:30:07.494496       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:30:07.494563       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:30:07.496836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:07.502215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:30:07.502370       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:30:07.502572       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-376255_01e4b301-4ab2-4e88-90be-8213872d2096!
	I1121 14:30:07.503060       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c8a28cf-d14c-42de-b72a-faa3b4f36feb", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-376255_01e4b301-4ab2-4e88-90be-8213872d2096 became leader
	W1121 14:30:07.510786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:07.514054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:30:07.603340       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-376255_01e4b301-4ab2-4e88-90be-8213872d2096!
	W1121 14:30:09.517386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:09.523088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:11.528136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:11.533792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:13.537345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:13.541698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:15.545533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:15.550374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:17.554459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:17.560214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:19.563656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:19.568461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:21.572335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:21.577956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-376255 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.11s)
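All four DeployApp failures in this report trip the same assertion: the suite runs `ulimit -n` inside the deployed busybox pod and gets a soft open-file limit of 1024 where it expects 1048576. As the "(dbg) Run:" lines show, the check is driven by shelling out to kubectl, so the following is a minimal, self-contained Go sketch of that step (not the actual start_stop_delete_test.go code); it assumes kubectl is on PATH and reuses the context name and expected value that appear verbatim in the logs above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Context name taken from the no-preload run above; any of the failing
	// profiles (old-k8s-version-012258, default-k8s-diff-port-376255, ...) would do.
	ctx := "no-preload-921956"

	// Same command the test runs:
	// kubectl --context <ctx> exec busybox -- /bin/sh -c "ulimit -n"
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		fmt.Printf("exec failed: %v\n%s", err, out)
		return
	}

	got := strings.TrimSpace(string(out))
	const want = "1048576"
	if got != want {
		// In this run the output would match the report: 'ulimit -n' returned 1024, expected 1048576
		fmt.Printf("'ulimit -n' returned %s, expected %s\n", got, want)
	}
}

On a passing run the program prints nothing for this step; here it would reproduce the mismatch message shown in each failed test above.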

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (13.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-921956 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [73c5bb38-ca7b-4848-93a8-0622f9c1292e] Pending
helpers_test.go:352: "busybox" [73c5bb38-ca7b-4848-93a8-0622f9c1292e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [73c5bb38-ca7b-4848-93a8-0622f9c1292e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00437806s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-921956 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-921956
helpers_test.go:243: (dbg) docker inspect no-preload-921956:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643",
	        "Created": "2025-11-21T14:29:20.340927235Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253091,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:29:20.385308254Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643/hostname",
	        "HostsPath": "/var/lib/docker/containers/2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643/hosts",
	        "LogPath": "/var/lib/docker/containers/2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643/2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643-json.log",
	        "Name": "/no-preload-921956",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-921956:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-921956",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643",
	                "LowerDir": "/var/lib/docker/overlay2/5405febd5abf836dbb465ba59f30da4381ba6c183a6e8927bdc55a96aceaaf63-init/diff:/var/lib/docker/overlay2/a649757dd9587fa5a20ca8a56ec1923099f2a5e912dc7e8e1dfa08e79248b59f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5405febd5abf836dbb465ba59f30da4381ba6c183a6e8927bdc55a96aceaaf63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5405febd5abf836dbb465ba59f30da4381ba6c183a6e8927bdc55a96aceaaf63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5405febd5abf836dbb465ba59f30da4381ba6c183a6e8927bdc55a96aceaaf63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-921956",
	                "Source": "/var/lib/docker/volumes/no-preload-921956/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-921956",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-921956",
	                "name.minikube.sigs.k8s.io": "no-preload-921956",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "202c56918b451e57ac6a6940b6773054760fbb30c422daf31ff01b1753b6ebd3",
	            "SandboxKey": "/var/run/docker/netns/202c56918b45",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-921956": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6269051e29ec1521c06cedb27527bf727867cfc36d1dc7699629b8110ce83ce3",
	                    "EndpointID": "9d6544cccf2a2df07942c882c6a2c4ef55c6ecebe3af4be8d2e234f681a411b9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "fa:b0:e0:f4:ee:69",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-921956",
	                        "2f8cf80dc583"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-921956 -n no-preload-921956
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-921956 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-921956 logs -n 25: (1.339908152s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p cilium-459127 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo containerd config dump                                                                                                                                                                                                        │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ delete  │ -p cert-expiration-371956                                                                                                                                                                                                                           │ cert-expiration-371956       │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ -p cilium-459127 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo crio config                                                                                                                                                                                                                   │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ delete  │ -p cilium-459127                                                                                                                                                                                                                                    │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ start   │ -p cert-options-733993 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p force-systemd-flag-730471 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p NoKubernetes-187733 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ stop    │ -p NoKubernetes-187733                                                                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p NoKubernetes-187733 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p NoKubernetes-187733 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │                     │
	│ delete  │ -p NoKubernetes-187733                                                                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p old-k8s-version-012258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ ssh     │ cert-options-733993 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p cert-options-733993 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ delete  │ -p cert-options-733993                                                                                                                                                                                                                              │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p no-preload-921956 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ ssh     │ force-systemd-flag-730471 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ delete  │ -p force-systemd-flag-730471                                                                                                                                                                                                                        │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p default-k8s-diff-port-376255 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-376255 │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-012258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:30 UTC │ 21 Nov 25 14:30 UTC │
	│ stop    │ -p old-k8s-version-012258 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:30 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:29:24
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:29:24.877938  255774 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:29:24.878133  255774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:29:24.878179  255774 out.go:374] Setting ErrFile to fd 2...
	I1121 14:29:24.878200  255774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:29:24.879901  255774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:29:24.881344  255774 out.go:368] Setting JSON to false
	I1121 14:29:24.883254  255774 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4307,"bootTime":1763731058,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:29:24.883372  255774 start.go:143] virtualization: kvm guest
	I1121 14:29:24.885483  255774 out.go:179] * [default-k8s-diff-port-376255] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:29:24.887201  255774 notify.go:221] Checking for updates...
	I1121 14:29:24.887242  255774 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:29:24.890729  255774 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:29:24.892963  255774 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:24.894677  255774 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 14:29:24.897870  255774 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:29:24.899765  255774 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:29:24.902854  255774 config.go:182] Loaded profile config "kubernetes-upgrade-797080": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:24.903030  255774 config.go:182] Loaded profile config "no-preload-921956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:24.903162  255774 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:24.903312  255774 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:29:24.939143  255774 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:29:24.939248  255774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:29:25.025144  255774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-21 14:29:25.01035373 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:29:25.025295  255774 docker.go:319] overlay module found
	I1121 14:29:25.027378  255774 out.go:179] * Using the docker driver based on user configuration
	I1121 14:29:22.611340  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-012258
	
	I1121 14:29:22.611365  249617 ubuntu.go:182] provisioning hostname "old-k8s-version-012258"
	I1121 14:29:22.611426  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:22.635589  249617 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:22.635869  249617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1121 14:29:22.635891  249617 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-012258 && echo "old-k8s-version-012258" | sudo tee /etc/hostname
	I1121 14:29:22.796661  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-012258
	
	I1121 14:29:22.796754  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:22.822578  249617 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:22.822834  249617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1121 14:29:22.822860  249617 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-012258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-012258/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-012258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:29:22.970644  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:29:22.970676  249617 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:29:22.970732  249617 ubuntu.go:190] setting up certificates
	I1121 14:29:22.970743  249617 provision.go:84] configureAuth start
	I1121 14:29:22.970826  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:22.991118  249617 provision.go:143] copyHostCerts
	I1121 14:29:22.991183  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:29:22.991193  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:29:22.991250  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:29:22.991367  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:29:22.991381  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:29:22.991414  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:29:22.991488  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:29:22.991499  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:29:22.991526  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:29:22.991627  249617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-012258 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-012258]
	I1121 14:29:23.140756  249617 provision.go:177] copyRemoteCerts
	I1121 14:29:23.140833  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:29:23.140885  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.161751  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.269718  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:29:23.292619  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1121 14:29:23.314336  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:29:23.337086  249617 provision.go:87] duration metric: took 366.309314ms to configureAuth
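configureAuth above copies the host CA and the freshly minted server certificate into the machine; with the docker driver there is no directly reachable machine address, so every remote command goes over SSH to 127.0.0.1 on whatever host port Docker published for the container's 22/tcp (33060 for this profile). A sketch of opening the same session by hand, assuming the key path shown in the log:

	# sketch: reach the kic container the same way cli_runner/sshutil do (bash)
	PROFILE=old-k8s-version-012258
	SSH_PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$PROFILE")
	ssh -o StrictHostKeyChecking=no \
		-i "/home/jenkins/minikube-integration/21847-11004/.minikube/machines/$PROFILE/id_rsa" \
		-p "$SSH_PORT" docker@127.0.0.1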
	I1121 14:29:23.337129  249617 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:29:23.337306  249617 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:23.337320  249617 machine.go:97] duration metric: took 3.89496072s to provisionDockerMachine
	I1121 14:29:23.337326  249617 client.go:176] duration metric: took 11.527957207s to LocalClient.Create
	I1121 14:29:23.337344  249617 start.go:167] duration metric: took 11.528071392s to libmachine.API.Create "old-k8s-version-012258"
	I1121 14:29:23.337352  249617 start.go:293] postStartSetup for "old-k8s-version-012258" (driver="docker")
	I1121 14:29:23.337365  249617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:29:23.337422  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:29:23.337471  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.359217  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.466089  249617 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:29:23.470146  249617 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:29:23.470174  249617 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:29:23.470185  249617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:29:23.470249  249617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:29:23.470349  249617 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:29:23.470480  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:29:23.479086  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:23.506776  249617 start.go:296] duration metric: took 169.402964ms for postStartSetup
	I1121 14:29:23.507166  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:23.527044  249617 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/config.json ...
	I1121 14:29:23.527374  249617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:29:23.527425  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.546669  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.645314  249617 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:29:23.650498  249617 start.go:128] duration metric: took 11.844529266s to createHost
	I1121 14:29:23.650523  249617 start.go:83] releasing machines lock for "old-k8s-version-012258", held for 11.844683904s
	I1121 14:29:23.650592  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:23.671161  249617 ssh_runner.go:195] Run: cat /version.json
	I1121 14:29:23.671227  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.671321  249617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:29:23.671403  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.694189  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.694196  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.856609  249617 ssh_runner.go:195] Run: systemctl --version
	I1121 14:29:23.863273  249617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:29:23.867917  249617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:29:23.867991  249617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:29:23.895679  249617 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:29:23.895707  249617 start.go:496] detecting cgroup driver to use...
	I1121 14:29:23.895742  249617 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:29:23.895805  249617 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:29:23.911897  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:29:23.925350  249617 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:29:23.925400  249617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:29:23.943424  249617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:29:23.962675  249617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:29:24.059689  249617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:29:24.169263  249617 docker.go:234] disabling docker service ...
	I1121 14:29:24.169325  249617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:29:24.191949  249617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:29:24.206181  249617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:29:24.319402  249617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:29:24.455060  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:29:24.472888  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:29:24.497138  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1121 14:29:24.524424  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:29:24.536491  249617 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:29:24.536702  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:29:24.547193  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:24.559919  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:29:24.571627  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:24.581977  249617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:29:24.629839  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:29:24.640310  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:29:24.650595  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:29:24.660801  249617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:29:24.669493  249617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:29:24.677810  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:24.781513  249617 ssh_runner.go:195] Run: sudo systemctl restart containerd
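The run of sed edits above rewrites /etc/containerd/config.toml for this cluster: systemd cgroup driver, the runc v2 shim, the pause:3.9 sandbox image, the CNI conf dir, and unprivileged ports, after which IPv4 forwarding is enabled and containerd is restarted. A condensed sketch of the core edits, with the expressions taken from the commands above (not the complete set):

	# sketch: the key config.toml rewrites before restarting containerd (bash)
	CFG=/etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' "$CFG"
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
	sudo sed -i '/^ *enable_unprivileged_ports = .*/d' "$CFG"
	sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' "$CFG"
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"    # pods need IPv4 forwarding
	sudo systemctl daemon-reload && sudo systemctl restart containerd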
	I1121 14:29:24.929576  249617 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:29:24.929707  249617 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:29:24.936782  249617 start.go:564] Will wait 60s for crictl version
	I1121 14:29:24.936893  249617 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.942453  249617 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:29:24.986447  249617 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:29:24.986527  249617 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:25.018021  249617 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:25.051308  249617 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1121 14:29:25.029036  255774 start.go:309] selected driver: docker
	I1121 14:29:25.029056  255774 start.go:930] validating driver "docker" against <nil>
	I1121 14:29:25.029071  255774 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:29:25.029977  255774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:29:25.123370  255774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-21 14:29:25.11156096 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:29:25.123696  255774 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:29:25.124078  255774 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:29:25.125758  255774 out.go:179] * Using Docker driver with root privileges
	I1121 14:29:25.127166  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:25.127249  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:25.127262  255774 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:29:25.127353  255774 start.go:353] cluster config:
	{Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:25.129454  255774 out.go:179] * Starting "default-k8s-diff-port-376255" primary control-plane node in "default-k8s-diff-port-376255" cluster
	I1121 14:29:25.130961  255774 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:29:25.132637  255774 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:29:25.134190  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:25.134237  255774 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1121 14:29:25.134251  255774 cache.go:65] Caching tarball of preloaded images
	I1121 14:29:25.134262  255774 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:29:25.134379  255774 preload.go:238] Found /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1121 14:29:25.134391  255774 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 14:29:25.134520  255774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json ...
	I1121 14:29:25.134560  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json: {Name:mk1db0ba6952ac549a7eae06783e73916a7ad392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.161339  255774 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:29:25.161363  255774 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:29:25.161384  255774 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:29:25.161419  255774 start.go:360] acquireMachinesLock for default-k8s-diff-port-376255: {Name:mka18b3ecaec4bae205bc7951f90400738bef300 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:29:25.161518  255774 start.go:364] duration metric: took 79.824µs to acquireMachinesLock for "default-k8s-diff-port-376255"
	I1121 14:29:25.161561  255774 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disabl
eCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:25.161653  255774 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:29:25.055066  249617 cli_runner.go:164] Run: docker network inspect old-k8s-version-012258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:25.085953  249617 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1121 14:29:25.093859  249617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:25.111432  249617 kubeadm.go:884] updating cluster {Name:old-k8s-version-012258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:29:25.111671  249617 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 14:29:25.111753  249617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:25.143860  249617 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:25.143888  249617 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:29:25.143953  249617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:25.174770  249617 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:25.174789  249617 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:29:25.174797  249617 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 containerd true true} ...
	I1121 14:29:25.174897  249617 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-012258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:25.174970  249617 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:25.211311  249617 cni.go:84] Creating CNI manager for ""
	I1121 14:29:25.211341  249617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:25.211371  249617 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:25.211401  249617 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-012258 NodeName:old-k8s-version-012258 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:25.211596  249617 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-012258"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:25.211673  249617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1121 14:29:25.224124  249617 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:29:25.224202  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:25.235430  249617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1121 14:29:25.254181  249617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:25.283842  249617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
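With the kubelet drop-in and the rendered kubeadm config staged under /var/tmp/minikube, a config like the one printed above can be sanity-checked before init with kubeadm's own validator; this assumes a kubeadm binary sits next to the kubelet in the versioned binaries directory and that the v1.28 config validate subcommand is available:

	# sketch (assumed tooling): validate the staged kubeadm config on the node
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
		--config /var/tmp/minikube/kubeadm.yaml.new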
	I1121 14:29:25.302971  249617 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:25.309092  249617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:25.325170  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:25.438037  249617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:25.469767  249617 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258 for IP: 192.168.94.2
	I1121 14:29:25.469790  249617 certs.go:195] generating shared ca certs ...
	I1121 14:29:25.469811  249617 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.470023  249617 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:25.470095  249617 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:25.470105  249617 certs.go:257] generating profile certs ...
	I1121 14:29:25.470177  249617 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key
	I1121 14:29:25.470199  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt with IP's: []
	I1121 14:29:25.634340  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt ...
	I1121 14:29:25.634374  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt: {Name:mk5e1a3132436dad740351857d527e3c45fff4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.648586  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key ...
	I1121 14:29:25.648625  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key: {Name:mk757010d91a13b26eb1340def496546bee9bf26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.648791  249617 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc
	I1121 14:29:25.648816  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1121 14:29:25.817862  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc ...
	I1121 14:29:25.817892  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc: {Name:mk8a482343e99af6e8bdd7e52a6e5b813685beb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.818099  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc ...
	I1121 14:29:25.818121  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc: {Name:mk4cf761e884b2a77e105e39ad6b0495b59b5aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.818237  249617 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt
	I1121 14:29:25.818331  249617 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key
	I1121 14:29:25.818390  249617 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key
	I1121 14:29:25.818406  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt with IP's: []
	I1121 14:29:26.390351  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt ...
	I1121 14:29:26.390391  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt: {Name:mk37207f300780275f6aa5331fc436d60739196c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:26.390599  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key ...
	I1121 14:29:26.390617  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key: {Name:mkff5d416178c38a50235608b783c3957bee8456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
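The profile certificates above (client, apiserver, aggregator proxy-client) are minted in-process by crypto.go rather than by an external tool. Purely as an illustration of what the apiserver leaf carries, an openssl equivalent using the SANs listed in the log (the CN here is arbitrary, and ca.crt/ca.key stand for the shared minikubeCA pair):

	# sketch (illustrative only, not minikube's code path): apiserver cert with the logged SANs
	openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
		-keyout apiserver.key -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
		-extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.94.2') \
		-out apiserver.crt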
	I1121 14:29:26.390849  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:26.390898  249617 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:26.390913  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:26.390946  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:26.390988  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:26.391029  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:26.391086  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:26.391817  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:26.418450  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:26.446063  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:26.469197  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:26.493823  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1121 14:29:26.526847  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:29:26.555176  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:25.915600  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:25.916118  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:25.916177  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:25.916228  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:25.948057  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:25.948080  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:25.948087  213058 cri.go:89] found id: ""
	I1121 14:29:25.948096  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:25.948160  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.952634  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.956801  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:25.956870  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:25.990988  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:25.991014  213058 cri.go:89] found id: ""
	I1121 14:29:25.991024  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:25.991083  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.995665  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:25.995736  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:26.031577  213058 cri.go:89] found id: ""
	I1121 14:29:26.031604  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.031612  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:26.031618  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:26.031665  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:26.064880  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:26.064907  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:26.064912  213058 cri.go:89] found id: ""
	I1121 14:29:26.064922  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:26.064979  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.070274  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.075659  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:26.075731  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:26.108079  213058 cri.go:89] found id: ""
	I1121 14:29:26.108108  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.108118  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:26.108125  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:26.108181  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:26.138988  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:26.139018  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:26.139024  213058 cri.go:89] found id: ""
	I1121 14:29:26.139034  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:26.139096  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.143487  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.147564  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:26.147631  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:26.185747  213058 cri.go:89] found id: ""
	I1121 14:29:26.185774  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.185785  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:26.185793  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:26.185848  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:26.220265  213058 cri.go:89] found id: ""
	I1121 14:29:26.220296  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.220308  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:26.220321  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:26.220335  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:26.265042  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:26.265072  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:26.402636  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:26.402672  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:26.484531  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:26.484565  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:26.484581  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:26.534239  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:26.534294  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:26.579971  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:26.580016  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:26.643693  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:26.643727  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:26.683712  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:26.683748  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:26.702800  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:26.702836  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:26.741813  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:26.741845  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:26.812944  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:26.812997  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:26.855307  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:26.855347  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
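The 213058 lines interleaved above come from a third concurrent profile: its apiserver is refusing connections on 192.168.76.2:8443, so the start loop falls back to gathering evidence, enumerating control-plane containers through crictl and tailing their logs along with kubelet and containerd from journald. The same triage can be run by hand on a node along these lines:

	# sketch: manual version of the log-gathering loop above (run inside the node)
	for name in kube-apiserver etcd kube-scheduler kube-controller-manager; do
		for id in $(sudo crictl ps -a --quiet --name="$name"); do
			echo "=== $name $id ==="
			sudo crictl logs --tail 400 "$id"
		done
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400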
	I1121 14:29:24.308535  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1121 14:29:24.308619  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.317176  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1121 14:29:24.317245  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.318774  252125 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1121 14:29:24.318825  252125 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.318867  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.328208  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1121 14:29:24.328249  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1121 14:29:24.328291  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.328305  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.328664  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1121 14:29:24.328708  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1121 14:29:24.335839  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1121 14:29:24.335900  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.337631  252125 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1121 14:29:24.337672  252125 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.337713  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.346363  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.346443  252125 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1121 14:29:24.346484  252125 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.346517  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.361284  252125 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1121 14:29:24.361331  252125 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.361375  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.361424  252125 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1121 14:29:24.361445  252125 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.361477  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.366787  252125 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1121 14:29:24.366831  252125 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1121 14:29:24.366871  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.379457  252125 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1121 14:29:24.379503  252125 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.379558  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.379677  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.388569  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.388608  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.388658  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.388681  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.388574  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.418705  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.418763  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.427350  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.434639  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.434777  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.437430  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.437452  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.477986  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.478027  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.478099  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:29:24.478334  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:24.478136  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.485019  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.485026  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.489362  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.521124  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.521651  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:29:24.521767  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:24.553384  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1121 14:29:24.553425  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1121 14:29:24.553522  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:29:24.553632  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:24.553699  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1121 14:29:24.553755  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.553769  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:29:24.553803  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:29:24.553853  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:24.553860  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:24.553893  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:29:24.553920  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1121 14:29:24.553945  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:24.553945  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1121 14:29:24.565027  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1121 14:29:24.565077  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1121 14:29:24.565153  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1121 14:29:24.565169  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1121 14:29:24.574297  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1121 14:29:24.574338  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1121 14:29:24.574363  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1121 14:29:24.574390  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1121 14:29:24.574393  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1121 14:29:24.574407  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
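The stat/scp pairs above are the cache-to-node copy step: a failed `stat -c "%s %y"` means the tarball is absent on the node, so the cached file is pushed over SSH. A rough stand-alone sketch of the same idea (paths taken from the log; the SSH endpoint and the /tmp staging are assumptions, since minikube streams the file through its own ssh_runner rather than plain scp):

    # Copy a cached image tarball to the node only when it is missing there.
    src="$HOME/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0"
    dst="/var/lib/minikube/images/etcd_3.6.4-0"
    node="docker@127.0.0.1"   # assumed SSH endpoint of the node
    if ! ssh "$node" "stat -c '%s %y' $dst" >/dev/null 2>&1; then
      scp "$src" "$node:/tmp/etcd_3.6.4-0"
      ssh "$node" "sudo mv /tmp/etcd_3.6.4-0 $dst"
    fi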
	I1121 14:29:24.784169  252125 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.784246  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.964305  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1121 14:29:25.029557  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:25.029626  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:25.445459  252125 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1121 14:29:25.445578  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:26.691152  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.661495413s)
	I1121 14:29:26.691188  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1121 14:29:26.691209  252125 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:26.691206  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.245604103s)
	I1121 14:29:26.691250  252125 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1121 14:29:26.691264  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:26.691297  252125 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:26.691347  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.696141  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:28.100615  252125 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.404441617s)
	I1121 14:29:28.100696  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:28.100615  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.409327822s)
	I1121 14:29:28.100767  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1121 14:29:28.100803  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:28.100853  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:28.132780  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
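Once a tarball is on the node, loading it into containerd's k8s.io namespace and confirming it is visible to the runtime is just the two ctr invocations seen above; run by hand on the node they look like:

    # Import a saved image into the namespace containerd/CRI uses for Kubernetes,
    # then list it by name to confirm the runtime can see it.
    sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
    sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1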
	I1121 14:29:25.163849  255774 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:29:25.164318  255774 start.go:159] libmachine.API.Create for "default-k8s-diff-port-376255" (driver="docker")
	I1121 14:29:25.164395  255774 client.go:173] LocalClient.Create starting
	I1121 14:29:25.164513  255774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem
	I1121 14:29:25.164575  255774 main.go:143] libmachine: Decoding PEM data...
	I1121 14:29:25.164605  255774 main.go:143] libmachine: Parsing certificate...
	I1121 14:29:25.164704  255774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem
	I1121 14:29:25.164760  255774 main.go:143] libmachine: Decoding PEM data...
	I1121 14:29:25.164776  255774 main.go:143] libmachine: Parsing certificate...
	I1121 14:29:25.165330  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:29:25.188513  255774 cli_runner.go:211] docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:29:25.188614  255774 network_create.go:284] running [docker network inspect default-k8s-diff-port-376255] to gather additional debugging logs...
	I1121 14:29:25.188640  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255
	W1121 14:29:25.213297  255774 cli_runner.go:211] docker network inspect default-k8s-diff-port-376255 returned with exit code 1
	I1121 14:29:25.213338  255774 network_create.go:287] error running [docker network inspect default-k8s-diff-port-376255]: docker network inspect default-k8s-diff-port-376255: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-376255 not found
	I1121 14:29:25.213435  255774 network_create.go:289] output of [docker network inspect default-k8s-diff-port-376255]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-376255 not found
	
	** /stderr **
	I1121 14:29:25.213589  255774 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:25.240844  255774 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-66cfc06dc768 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:44:28:22:82:94} reservation:<nil>}
	I1121 14:29:25.241874  255774 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-39921db0d513 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:e4:85:98:a5:e3} reservation:<nil>}
	I1121 14:29:25.242975  255774 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-36a8741c90a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:21:99:72:63:4a} reservation:<nil>}
	I1121 14:29:25.244042  255774 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-63d543fc8bbd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:58:40:d2:33:c4} reservation:<nil>}
	I1121 14:29:25.245269  255774 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb46e0}
	I1121 14:29:25.245303  255774 network_create.go:124] attempt to create docker network default-k8s-diff-port-376255 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1121 14:29:25.245384  255774 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 default-k8s-diff-port-376255
	I1121 14:29:25.322210  255774 network_create.go:108] docker network default-k8s-diff-port-376255 192.168.85.0/24 created
	I1121 14:29:25.322244  255774 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-376255" container
	I1121 14:29:25.322309  255774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:29:25.346732  255774 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-376255 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:29:25.374919  255774 oci.go:103] Successfully created a docker volume default-k8s-diff-port-376255
	I1121 14:29:25.374994  255774 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-376255-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --entrypoint /usr/bin/test -v default-k8s-diff-port-376255:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:29:26.343288  255774 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-376255
	I1121 14:29:26.343370  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:26.343387  255774 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:29:26.343457  255774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-376255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
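The 255774 lines above are the kic bring-up for a second profile: probe for an existing Docker network, pick the first free /24 (192.168.85.0/24 here after 49/58/67/76 were taken), create the bridge network and a named volume, then unpack the preload tarball into that volume with a throwaway tar container. Reduced to plain docker commands (profile name and tarball path are placeholders; the flags mirror the log, with the minikube-specific labels and masquerade options trimmed):

    docker network create --driver=bridge \
      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o com.docker.network.driver.mtu=1500 my-profile
    docker volume create my-profile
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
      -v my-profile:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924 \
      -I lz4 -xf /preloaded.tar -C /extractDir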
	I1121 14:29:26.582319  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:29:26.606403  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:26.635408  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:26.661287  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:26.686582  249617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:26.703157  249617 ssh_runner.go:195] Run: openssl version
	I1121 14:29:26.712353  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:26.725593  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.732381  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.732523  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.774823  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:26.785127  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:26.796035  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.800685  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.800751  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.842185  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:26.852632  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:26.863838  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.869571  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.869642  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.922017  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
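The openssl/ln sequence above is how the node is made to trust the minikube CA and the test certificates: each PEM is copied under /usr/share/ca-certificates and symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem in this run). Done by hand for one certificate, the pattern is:

    # Install a CA certificate using the hash-named symlink layout OpenSSL expects.
    sudo cp minikubeCA.pem /usr/share/ca-certificates/minikubeCA.pem
    hash="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"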
	I1121 14:29:26.934065  249617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:26.939457  249617 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:26.939526  249617 kubeadm.go:401] StartCluster: {Name:old-k8s-version-012258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:26.939648  249617 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:26.939710  249617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:26.978114  249617 cri.go:89] found id: ""
	I1121 14:29:26.978192  249617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:26.989363  249617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:27.000529  249617 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:27.000603  249617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:27.012158  249617 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:27.012179  249617 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:27.012231  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:29:27.022084  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:27.022141  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:27.034139  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:29:27.044897  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:27.045038  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:27.056593  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:29:27.066532  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:27.066615  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:27.077925  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:29:27.088254  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:27.088320  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:27.098442  249617 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:27.205509  249617 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:27.290009  249617 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
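Above, process 249617 finds no existing kubeconfigs (every grep for control-plane.minikube.internal fails because the files are absent), clears them anyway, and launches kubeadm init with a pinned binary path and a long --ignore-preflight-errors list; the two [WARNING ...] lines are kubeadm's preflight noting that the kernel 'configs' module is missing and the kubelet service is not enabled, both expected inside the kic container. A trimmed-down sketch of that invocation (the ignore list here is abbreviated; the full one appears in the log):

    # Clear a stale admin.conf only if it does not point at the expected endpoint.
    if ! sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/admin.conf 2>/dev/null; then
      sudo rm -f /etc/kubernetes/admin.conf
    fi
    sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem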
	I1121 14:29:29.388121  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:29.388594  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:29.388645  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:29.388690  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:29.416964  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:29.416991  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:29.416996  213058 cri.go:89] found id: ""
	I1121 14:29:29.417006  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:29.417074  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.421476  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.425483  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:29.425557  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:29.453687  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:29.453708  213058 cri.go:89] found id: ""
	I1121 14:29:29.453718  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:29.453783  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.458267  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:29.458353  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:29.485804  213058 cri.go:89] found id: ""
	I1121 14:29:29.485865  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.485876  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:29.485883  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:29.485940  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:29.514265  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:29.514290  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:29.514294  213058 cri.go:89] found id: ""
	I1121 14:29:29.514302  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:29.514349  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.518626  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.522446  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:29.522501  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:29.549770  213058 cri.go:89] found id: ""
	I1121 14:29:29.549799  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.549811  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:29.549819  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:29.549868  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:29.577193  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:29.577217  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:29.577222  213058 cri.go:89] found id: ""
	I1121 14:29:29.577230  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:29.577288  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.581256  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.585291  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:29.585347  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:29.614632  213058 cri.go:89] found id: ""
	I1121 14:29:29.614664  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.614674  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:29.614682  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:29.614740  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:29.645697  213058 cri.go:89] found id: ""
	I1121 14:29:29.645721  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.645730  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:29.645741  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:29.645756  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:29.675578  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:29.675607  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:29.718952  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:29.718990  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:29.750089  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:29.750117  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:29.858708  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:29.858738  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:29.902976  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:29.903013  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:29.938083  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:29.938118  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:29.976329  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:29.976366  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:29.991448  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:29.991485  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:30.053990  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:30.054015  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:30.054032  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:30.089042  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:30.089076  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:30.124498  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:30.124528  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:32.685601  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:32.686035  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:32.686089  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:32.686144  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:32.744948  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:32.745095  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:32.745132  213058 cri.go:89] found id: ""
	I1121 14:29:32.745169  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:32.745355  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.752020  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.760837  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:32.761106  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:32.807418  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:32.807451  213058 cri.go:89] found id: ""
	I1121 14:29:32.807462  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:32.807521  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.813216  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:32.813289  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:32.852598  213058 cri.go:89] found id: ""
	I1121 14:29:32.852633  213058 logs.go:282] 0 containers: []
	W1121 14:29:32.852645  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:32.852653  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:32.852711  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:32.889120  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:32.889144  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:32.889148  213058 cri.go:89] found id: ""
	I1121 14:29:32.889157  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:32.889211  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.894834  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.900572  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:32.900646  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:32.937810  213058 cri.go:89] found id: ""
	I1121 14:29:32.937836  213058 logs.go:282] 0 containers: []
	W1121 14:29:32.937846  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:32.937853  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:32.937914  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:32.975713  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:32.975735  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:32.975741  213058 cri.go:89] found id: ""
	I1121 14:29:32.975751  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:32.975815  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.981574  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.985965  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:32.986030  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:33.019894  213058 cri.go:89] found id: ""
	I1121 14:29:33.019923  213058 logs.go:282] 0 containers: []
	W1121 14:29:33.019935  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:33.019949  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:33.020009  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:33.051872  213058 cri.go:89] found id: ""
	I1121 14:29:33.051901  213058 logs.go:282] 0 containers: []
	W1121 14:29:33.051911  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:33.051923  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:33.051937  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:33.103114  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:33.103153  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:33.142816  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:33.142846  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:33.209677  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:33.209736  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:33.255185  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:33.255220  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:33.272562  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:33.272600  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:33.319098  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:33.319132  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:33.366245  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:33.366286  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:33.410624  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:33.410660  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:33.458217  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:33.458253  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:33.586879  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:33.586919  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
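Process 213058 above belongs to a different cluster in its restart loop: the apiserver healthz probe is refused, so minikube enumerates the control-plane containers through crictl and dumps their logs (plus the kubelet and containerd journals and dmesg) before retrying. The same triage can be done by hand on the node; a sketch, with the IP and container name taken from the log and curl's -k used because healthz is served with the cluster's own CA:

    # Probe the apiserver; on failure, pull recent logs from every kube-apiserver container.
    if ! curl -fsk https://192.168.76.2:8443/healthz >/dev/null; then
      for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
        sudo crictl logs --tail 400 "$id"
      done
      sudo journalctl -u kubelet -n 400
    fi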
	I1121 14:29:29.835800  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.734910291s)
	I1121 14:29:29.835838  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1121 14:29:29.835860  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:29.835902  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:29.835802  252125 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.702989246s)
	I1121 14:29:29.835965  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1121 14:29:29.836056  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:29.840842  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1121 14:29:29.840873  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1121 14:29:32.866902  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (3.030968163s)
	I1121 14:29:32.866941  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1121 14:29:32.866961  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:32.867002  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:31.901829  255774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-376255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.558304176s)
	I1121 14:29:31.901864  255774 kic.go:203] duration metric: took 5.558473353s to extract preloaded images to volume ...
	W1121 14:29:31.901941  255774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 14:29:31.901969  255774 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 14:29:31.902010  255774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:29:31.985847  255774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-376255 --name default-k8s-diff-port-376255 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --network default-k8s-diff-port-376255 --ip 192.168.85.2 --volume default-k8s-diff-port-376255:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:29:32.403824  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Running}}
	I1121 14:29:32.427802  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.456228  255774 cli_runner.go:164] Run: docker exec default-k8s-diff-port-376255 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:29:32.514766  255774 oci.go:144] the created container "default-k8s-diff-port-376255" has a running status.
	I1121 14:29:32.514799  255774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa...
	I1121 14:29:32.829505  255774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:29:32.861911  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.888316  255774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:29:32.888342  255774 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-376255 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:29:32.948121  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.975355  255774 machine.go:94] provisionDockerMachine start ...
	I1121 14:29:32.975799  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:33.002463  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:33.002813  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:33.002834  255774 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:29:33.003677  255774 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37682->127.0.0.1:33070: read: connection reset by peer
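The "connection reset by peer" on the first SSH dial at 14:29:33 is most likely just sshd inside the freshly started container not being ready yet; provisioning keeps retrying against the published 22/tcp port. Resolving that port and dialing it by hand would look roughly like the following (the docker user and the profile's id_rsa path are assumptions based on the key setup logged above):

    # Resolve the host port mapped to the container's SSH port, then connect.
    port="$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      default-k8s-diff-port-376255)"
    ssh -o StrictHostKeyChecking=no \
      -i ~/.minikube/machines/default-k8s-diff-port-376255/id_rsa \
      -p "$port" docker@127.0.0.1 hostname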
	I1121 14:29:37.228254  249617 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1121 14:29:37.228434  249617 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:37.228644  249617 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:37.228822  249617 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:37.228907  249617 kubeadm.go:319] OS: Linux
	I1121 14:29:37.228971  249617 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:37.229029  249617 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:37.229111  249617 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:37.229198  249617 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:37.229264  249617 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:37.229333  249617 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:37.229403  249617 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:37.229468  249617 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:37.229624  249617 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:37.229762  249617 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:37.229892  249617 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1121 14:29:37.230051  249617 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:37.235113  249617 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:37.235306  249617 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:37.235508  249617 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:37.235691  249617 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:37.235858  249617 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:37.236102  249617 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:37.236205  249617 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:37.236303  249617 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:37.236516  249617 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-012258] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:29:37.236607  249617 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:37.236765  249617 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-012258] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:29:37.236861  249617 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:37.236954  249617 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:37.237021  249617 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:37.237104  249617 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:37.237178  249617 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:37.237257  249617 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:37.237352  249617 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:37.237438  249617 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:37.237554  249617 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:37.237649  249617 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:37.239227  249617 out.go:252]   - Booting up control plane ...
	I1121 14:29:37.239369  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:37.239534  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:37.239682  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:37.239829  249617 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:37.239965  249617 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:37.240022  249617 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:37.240260  249617 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1121 14:29:37.240373  249617 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.503152 seconds
	I1121 14:29:37.240759  249617 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:37.240933  249617 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:37.241035  249617 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:37.241286  249617 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-012258 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:37.241409  249617 kubeadm.go:319] [bootstrap-token] Using token: yix385.n0xejrlt7sdx1ngs
	I1121 14:29:37.243198  249617 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:37.243379  249617 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:37.243497  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:37.243755  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:37.243946  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:37.244147  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:37.244287  249617 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:37.244477  249617 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:37.244564  249617 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:37.244632  249617 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:37.244642  249617 kubeadm.go:319] 
	I1121 14:29:37.244725  249617 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:37.244736  249617 kubeadm.go:319] 
	I1121 14:29:37.244834  249617 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:37.244845  249617 kubeadm.go:319] 
	I1121 14:29:37.244877  249617 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:37.244966  249617 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:37.245033  249617 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:37.245045  249617 kubeadm.go:319] 
	I1121 14:29:37.245111  249617 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:37.245120  249617 kubeadm.go:319] 
	I1121 14:29:37.245178  249617 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:37.245192  249617 kubeadm.go:319] 
	I1121 14:29:37.245274  249617 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:37.245371  249617 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:37.245468  249617 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:37.245476  249617 kubeadm.go:319] 
	I1121 14:29:37.245604  249617 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:37.245734  249617 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:37.245755  249617 kubeadm.go:319] 
	I1121 14:29:37.245866  249617 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yix385.n0xejrlt7sdx1ngs \
	I1121 14:29:37.246024  249617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:37.246062  249617 kubeadm.go:319] 	--control-plane 
	I1121 14:29:37.246072  249617 kubeadm.go:319] 
	I1121 14:29:37.246178  249617 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:37.246189  249617 kubeadm.go:319] 
	I1121 14:29:37.246294  249617 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yix385.n0xejrlt7sdx1ngs \
	I1121 14:29:37.246443  249617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:37.246454  249617 cni.go:84] Creating CNI manager for ""
	I1121 14:29:37.246462  249617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:37.248274  249617 out.go:179] * Configuring CNI (Container Networking Interface) ...
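Note: the lines above show minikube selecting kindnet as the CNI because the "docker" driver is paired with the "containerd" runtime for this profile. As an illustrative follow-up only (not part of this test run), the CNI configuration written onto the node can be inspected over the profile's SSH session; the profile name is the one used in this run and /etc/cni/net.d is the standard location referenced elsewhere in this log:

	# Illustrative only -- inspect the CNI config minikube wrote for this profile:
	minikube -p old-k8s-version-012258 ssh -- sudo ls -la /etc/cni/net.d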
	I1121 14:29:36.147516  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376255
	
	I1121 14:29:36.147569  255774 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-376255"
	I1121 14:29:36.147633  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.169609  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:36.169898  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:36.169928  255774 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-376255 && echo "default-k8s-diff-port-376255" | sudo tee /etc/hostname
	I1121 14:29:36.328958  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376255
	
	I1121 14:29:36.329040  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.353105  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:36.353414  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:36.353448  255774 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-376255' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-376255/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-376255' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:29:36.504067  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:29:36.504097  255774 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:29:36.504119  255774 ubuntu.go:190] setting up certificates
	I1121 14:29:36.504133  255774 provision.go:84] configureAuth start
	I1121 14:29:36.504206  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:36.528674  255774 provision.go:143] copyHostCerts
	I1121 14:29:36.528752  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:29:36.528762  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:29:36.528840  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:29:36.528968  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:29:36.528997  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:29:36.529043  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:29:36.529141  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:29:36.529152  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:29:36.529188  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:29:36.529281  255774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-376255 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-376255 localhost minikube]
	I1121 14:29:36.617208  255774 provision.go:177] copyRemoteCerts
	I1121 14:29:36.617283  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:29:36.617345  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.639948  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:36.749486  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:29:36.777360  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1121 14:29:36.804875  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:29:36.830920  255774 provision.go:87] duration metric: took 326.762892ms to configureAuth
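The configureAuth step above generates a CA-signed server certificate whose subject alternative names are listed in the san=[...] line, then copies it to /etc/docker on the node. Purely as a sketch (minikube does this in Go via crypto/x509, not with openssl), an equivalent certificate could be issued like this, reusing the ca.pem/ca-key.pem pair named in the auth options above:

	# Sketch only: issue a server cert signed by the existing CA with the same SANs.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.default-k8s-diff-port-376255"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:default-k8s-diff-port-376255,DNS:localhost,DNS:minikube')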
	I1121 14:29:36.830953  255774 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:29:36.831165  255774 config.go:182] Loaded profile config "default-k8s-diff-port-376255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:36.831181  255774 machine.go:97] duration metric: took 3.855604158s to provisionDockerMachine
	I1121 14:29:36.831191  255774 client.go:176] duration metric: took 11.666782197s to LocalClient.Create
	I1121 14:29:36.831216  255774 start.go:167] duration metric: took 11.666902979s to libmachine.API.Create "default-k8s-diff-port-376255"
	I1121 14:29:36.831234  255774 start.go:293] postStartSetup for "default-k8s-diff-port-376255" (driver="docker")
	I1121 14:29:36.831254  255774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:29:36.831311  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:29:36.831360  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.855811  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:36.969760  255774 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:29:36.974452  255774 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:29:36.974529  255774 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:29:36.974577  255774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:29:36.974658  255774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:29:36.974771  255774 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:29:36.974903  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:29:36.984975  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:37.017462  255774 start.go:296] duration metric: took 186.210262ms for postStartSetup
	I1121 14:29:37.017947  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:37.041309  255774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json ...
	I1121 14:29:37.041659  255774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:29:37.041731  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.070697  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.177189  255774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:29:37.185711  255774 start.go:128] duration metric: took 12.024042461s to createHost
	I1121 14:29:37.185741  255774 start.go:83] releasing machines lock for "default-k8s-diff-port-376255", held for 12.024206528s
	I1121 14:29:37.185820  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:37.211853  255774 ssh_runner.go:195] Run: cat /version.json
	I1121 14:29:37.211903  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.211965  255774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:29:37.212033  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.238575  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.242252  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.421321  255774 ssh_runner.go:195] Run: systemctl --version
	I1121 14:29:37.431728  255774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:29:37.437939  255774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:29:37.438053  255774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:29:37.469409  255774 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:29:37.469437  255774 start.go:496] detecting cgroup driver to use...
	I1121 14:29:37.469471  255774 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:29:37.469521  255774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:29:37.490669  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:29:37.507754  255774 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:29:37.507821  255774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:29:37.525644  255774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:29:37.545289  255774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:29:37.674060  255774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:29:37.795128  255774 docker.go:234] disabling docker service ...
	I1121 14:29:37.795198  255774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:29:37.819043  255774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:29:37.834819  255774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:29:37.960408  255774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:29:38.072269  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:29:38.089314  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:29:38.105248  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:29:38.117445  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:29:38.128509  255774 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:29:38.128607  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:29:38.139526  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:38.150896  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:29:38.161459  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:38.173179  255774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:29:38.183645  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:29:38.194923  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:29:38.207896  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:29:38.220346  255774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:29:38.230823  255774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:29:38.241807  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:38.339708  255774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:29:38.460319  255774 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:29:38.460387  255774 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:29:38.465812  255774 start.go:564] Will wait 60s for crictl version
	I1121 14:29:38.465875  255774 ssh_runner.go:195] Run: which crictl
	I1121 14:29:38.470166  255774 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:29:38.507773  255774 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:29:38.507860  255774 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:38.532247  255774 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:38.559098  255774 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
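The sed edits above rewrite /etc/containerd/config.toml so containerd uses the systemd cgroup driver, the registry.k8s.io/pause:3.10.1 sandbox image, and unprivileged ports, after which containerd is restarted and its version verified via crictl. An illustrative spot check on the node (not part of the run) that those values landed in the file would be:

	# Illustrative check of the edited containerd config:
	sudo grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
	# expected to show, roughly:
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   enable_unprivileged_ports = true
	#   SystemdCgroup = true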
	W1121 14:29:33.655577  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:33.655599  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:33.655612  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.225853  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:36.226247  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:36.226304  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:36.226364  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:36.259583  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:36.259613  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:36.259619  213058 cri.go:89] found id: ""
	I1121 14:29:36.259628  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:36.259690  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.264798  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.269597  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:36.269663  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:36.304312  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:36.304335  213058 cri.go:89] found id: ""
	I1121 14:29:36.304346  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:36.304403  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.309760  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:36.309833  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:36.342617  213058 cri.go:89] found id: ""
	I1121 14:29:36.342643  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.342653  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:36.342660  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:36.342722  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:36.378880  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.378909  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:36.378914  213058 cri.go:89] found id: ""
	I1121 14:29:36.378924  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:36.378996  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.384032  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.388866  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:36.388932  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:36.427253  213058 cri.go:89] found id: ""
	I1121 14:29:36.427282  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.427293  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:36.427300  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:36.427355  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:36.461581  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:36.461604  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:36.461609  213058 cri.go:89] found id: ""
	I1121 14:29:36.461618  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:36.461677  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.466623  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.471422  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:36.471490  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:36.503502  213058 cri.go:89] found id: ""
	I1121 14:29:36.503533  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.503566  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:36.503575  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:36.503633  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:36.538350  213058 cri.go:89] found id: ""
	I1121 14:29:36.538379  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.538390  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:36.538404  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:36.538419  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:36.666987  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:36.667025  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:36.685628  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:36.685659  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:36.763464  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:36.763491  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:36.763508  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:36.808789  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:36.808832  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.887558  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:36.887596  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:36.952391  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:36.952434  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:36.993139  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:36.993167  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:37.037499  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:37.037552  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:37.084237  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:37.084270  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:37.132236  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:37.132272  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:37.172720  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:37.172753  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:34.341753  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.474720913s)
	I1121 14:29:34.341781  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1121 14:29:34.341812  252125 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:34.341855  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:37.308520  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.966633628s)
	I1121 14:29:37.308585  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1121 14:29:37.308616  252125 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:37.308666  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:37.772300  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1121 14:29:37.772349  252125 cache_images.go:125] Successfully loaded all cached images
	I1121 14:29:37.772358  252125 cache_images.go:94] duration metric: took 13.627858156s to LoadCachedImages
	I1121 14:29:37.772375  252125 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1121 14:29:37.772522  252125 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-921956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-921956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:37.772622  252125 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:37.802988  252125 cni.go:84] Creating CNI manager for ""
	I1121 14:29:37.803017  252125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:37.803041  252125 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:37.803067  252125 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-921956 NodeName:no-preload-921956 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:37.803212  252125 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-921956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:37.803298  252125 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:37.814189  252125 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1121 14:29:37.814255  252125 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:37.824124  252125 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1121 14:29:37.824214  252125 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1121 14:29:37.824231  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1121 14:29:37.824217  252125 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1121 14:29:37.829417  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1121 14:29:37.829466  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1121 14:29:38.860713  252125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:29:38.875498  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1121 14:29:38.880447  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1121 14:29:38.880477  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1121 14:29:39.014274  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1121 14:29:39.021151  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1121 14:29:39.021187  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
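Because no preload exists for this profile, kubectl, kubelet and kubeadm are fetched from dl.k8s.io together with their .sha256 checksum files (per the URLs above) and copied into /var/lib/minikube/binaries/v1.34.1. The equivalent manual download and verification for one of these binaries, shown only as an illustration, is:

	# Sketch: fetch kubelet v1.34.1 and verify it against its published checksum.
	curl -fsSLO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet
	curl -fsSLO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check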
	I1121 14:29:39.234010  252125 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:39.244382  252125 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1121 14:29:39.259897  252125 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:39.279143  252125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
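The kubeadm config printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new on the line above and is later consumed by kubeadm during init. A manual dry run against such a file, as a sketch only (minikube's real invocation adds further flags not shown here), would look like:

	# Sketch: exercise the generated config without changing the node.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run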
	I1121 14:29:38.560688  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:38.580956  255774 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:29:38.585728  255774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:38.599140  255774 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:29:38.599295  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:38.599391  255774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:38.631637  255774 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:38.631660  255774 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:29:38.631720  255774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:38.665498  255774 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:38.665522  255774 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:29:38.665530  255774 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1121 14:29:38.665659  255774 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-376255 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:38.665752  255774 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:38.694106  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:38.694138  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:38.694156  255774 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:38.694182  255774 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-376255 NodeName:default-k8s-diff-port-376255 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:38.694318  255774 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-376255"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:38.694377  255774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:38.704016  255774 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:29:38.704074  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:38.712471  255774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1121 14:29:38.726311  255774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:38.743589  255774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1121 14:29:38.759275  255774 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:38.763723  255774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:38.775814  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:38.870850  255774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:38.898876  255774 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255 for IP: 192.168.85.2
	I1121 14:29:38.898898  255774 certs.go:195] generating shared ca certs ...
	I1121 14:29:38.898917  255774 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:38.899068  255774 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:38.899116  255774 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:38.899130  255774 certs.go:257] generating profile certs ...
	I1121 14:29:38.899196  255774 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key
	I1121 14:29:38.899223  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt with IP's: []
	I1121 14:29:39.101636  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt ...
	I1121 14:29:39.101669  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt: {Name:mk48f410a390b01d5b10a9357a2648374ae8306b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.101873  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key ...
	I1121 14:29:39.101885  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key: {Name:mkb89c45215e08640f5b5fa9a6de6863ea0983e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.102008  255774 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066
	I1121 14:29:39.102024  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:29:39.438352  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 ...
	I1121 14:29:39.438387  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066: {Name:mkc5f7dc938a9541dec0c2accd850515b39a25d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.438574  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066 ...
	I1121 14:29:39.438586  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066: {Name:mka67f2d91e35acd02a0ed4174188db6877ef796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.438666  255774 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt
	I1121 14:29:39.438744  255774 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key
	I1121 14:29:39.438811  255774 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key
	I1121 14:29:39.438826  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt with IP's: []
	I1121 14:29:39.523793  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt ...
	I1121 14:29:39.523827  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt: {Name:mk2418751bb08ae4f2cae2628ba430b2e731f823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.524011  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key ...
	I1121 14:29:39.524031  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key: {Name:mk12031f310020bd38886fd870544563c6ab1faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.524255  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:39.524307  255774 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:39.524323  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:39.524353  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:39.524383  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:39.524407  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:39.524445  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:39.525071  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:39.546065  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:39.565880  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:39.585450  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:39.604394  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1121 14:29:39.623736  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:29:39.642460  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:39.661463  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:29:39.681314  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:39.879137  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:39.899730  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:39.918630  255774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:39.935942  255774 ssh_runner.go:195] Run: openssl version
	I1121 14:29:39.943062  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:40.020861  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.026152  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.026209  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.067681  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:40.077051  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:40.087944  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.092369  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.092434  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.132125  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:40.142255  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:40.152828  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.157171  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.157265  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.199881  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
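The openssl/ln pairs above install each PEM file under the OpenSSL hashed-directory convention: the certificate is copied to /usr/share/ca-certificates, its subject hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs so TLS clients on the node can find it. A minimal bash sketch of that convention, using a hypothetical example path rather than one of the profile certs from this run:

    # Illustrative sketch only; CERT is a made-up example path, not from this test run.
    CERT=/usr/share/ca-certificates/example.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. 3ec20f2e
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL-based clients look up CAs by <hash>.0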
	I1121 14:29:40.210053  255774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:40.214456  255774 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:40.214524  255774 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:40.214625  255774 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:40.214692  255774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:40.249359  255774 cri.go:89] found id: ""
	I1121 14:29:40.249429  255774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:40.259121  255774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:40.270847  255774 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:40.270910  255774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:40.283266  255774 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:40.283287  255774 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:40.283341  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1121 14:29:40.293676  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:40.293725  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:40.303277  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1121 14:29:40.313015  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:40.313073  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:40.322086  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.330920  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:40.331015  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.339376  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1121 14:29:40.347984  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:40.348046  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
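The grep/rm pairs above are minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise removed before kubeadm init rewrites it. A rough bash equivalent of that sweep follows; the loop itself is a sketch, while the endpoint and file names are taken from the log lines above:

    # Sketch of the check-then-remove pattern above; not the literal minikube code.
    ENDPOINT="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done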
	I1121 14:29:40.356683  255774 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:40.404354  255774 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:29:40.404455  255774 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:40.435448  255774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:40.435583  255774 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:40.435628  255774 kubeadm.go:319] OS: Linux
	I1121 14:29:40.435689  255774 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:40.435827  255774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:40.435905  255774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:40.436039  255774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:40.436108  255774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:40.436176  255774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:40.436276  255774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:40.436351  255774 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:40.508224  255774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:40.508370  255774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:40.508531  255774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:29:40.513996  255774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
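As the preflight output above notes, the required images can be pulled ahead of time so kubeadm init does not block on downloads. A hedged example of that pre-pull, reusing the versioned binary path and config file from the Start command above (illustrative only; the test run did not execute this step separately):

    # Optional pre-pull suggested by the preflight message; illustrative sketch.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull \
      --config /var/tmp/minikube/kubeadm.yaml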
	I1121 14:29:39.295828  252125 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:39.301164  252125 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:39.312709  252125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:39.400897  252125 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:39.429294  252125 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956 for IP: 192.168.103.2
	I1121 14:29:39.429315  252125 certs.go:195] generating shared ca certs ...
	I1121 14:29:39.429332  252125 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.429485  252125 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:39.429583  252125 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:39.429600  252125 certs.go:257] generating profile certs ...
	I1121 14:29:39.429678  252125 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key
	I1121 14:29:39.429693  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt with IP's: []
	I1121 14:29:39.556088  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt ...
	I1121 14:29:39.556115  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: {Name:mkc697edce2d4ccb5a4a2ccbe74255aef4a205c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.556297  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key ...
	I1121 14:29:39.556312  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key: {Name:mkad7b167b883af61314c3f8b6c71358edc782dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.556419  252125 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d
	I1121 14:29:39.556435  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1121 14:29:39.871499  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d ...
	I1121 14:29:39.871529  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d: {Name:mkc839b1c936af809ed1159ef4599336fd260d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.871726  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d ...
	I1121 14:29:39.871748  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d: {Name:mkc2f0abcac84f6547f3e0edb165e90b14fdd7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.871882  252125 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt
	I1121 14:29:39.871997  252125 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key
	I1121 14:29:39.872096  252125 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key
	I1121 14:29:39.872120  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt with IP's: []
	I1121 14:29:40.083173  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt ...
	I1121 14:29:40.083201  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt: {Name:mkba7efd029f616230e0b3cf14c4f32abac0549e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:40.083385  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key ...
	I1121 14:29:40.083414  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key: {Name:mk24f6fbb57f5dfce4a401be193e0a832a6ccf6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:40.083661  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:40.083700  252125 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:40.083711  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:40.083749  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:40.083780  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:40.083827  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:40.083887  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:40.084653  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:40.106430  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:40.126520  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:40.148412  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:40.169973  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:29:40.191493  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:29:40.214458  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:40.234692  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:29:40.261986  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:40.352437  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:40.372804  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:40.394700  252125 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:40.411183  252125 ssh_runner.go:195] Run: openssl version
	I1121 14:29:40.419607  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:40.431060  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.436371  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.436429  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.481320  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:40.492797  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:40.502878  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.507432  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.507499  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.567779  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:40.577673  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:40.587826  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.592472  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.592528  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.627626  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:40.637464  252125 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:40.641884  252125 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:40.641943  252125 kubeadm.go:401] StartCluster: {Name:no-preload-921956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-921956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:40.642030  252125 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:40.642085  252125 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:40.673351  252125 cri.go:89] found id: ""
	I1121 14:29:40.673423  252125 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:40.682715  252125 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:40.691493  252125 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:40.691581  252125 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:40.700143  252125 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:40.700160  252125 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:40.700205  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:29:40.708734  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:40.708799  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:40.717135  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:29:40.726191  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:40.726262  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:40.734074  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.742647  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:40.742709  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.751091  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:29:40.759770  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:40.759841  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:40.768253  252125 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:40.810825  252125 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:29:40.810892  252125 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:40.831836  252125 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:40.831940  252125 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:40.832026  252125 kubeadm.go:319] OS: Linux
	I1121 14:29:40.832115  252125 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:40.832212  252125 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:40.832286  252125 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:40.832358  252125 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:40.832432  252125 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:40.832504  252125 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:40.832668  252125 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:40.832735  252125 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:40.895341  252125 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:40.895491  252125 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:40.895637  252125 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:29:40.901358  252125 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:37.249631  249617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:37.262987  249617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1121 14:29:37.263020  249617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:37.283444  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:38.138719  249617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:38.138808  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:38.138810  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-012258 minikube.k8s.io/updated_at=2025_11_21T14_29_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=old-k8s-version-012258 minikube.k8s.io/primary=true
	I1121 14:29:38.150782  249617 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:38.225220  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:38.726231  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:39.225533  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:39.725591  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.225601  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.725734  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:41.226112  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.521190  255774 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:40.521325  255774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:40.521431  255774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:41.003970  255774 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:41.240665  255774 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:41.425685  255774 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:41.689428  255774 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:41.923373  255774 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:41.923563  255774 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-376255 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:29:42.051973  255774 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:42.052979  255774 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-376255 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:29:42.277531  255774 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:42.491572  255774 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:42.605458  255774 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:42.605535  255774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:42.870659  255774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:43.039072  255774 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:29:43.228611  255774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:43.489903  255774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:43.563271  255774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:43.563948  255774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:43.568453  255774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:39.727688  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:39.728083  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:39.728134  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:39.728197  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:39.758413  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:39.758436  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:39.758441  213058 cri.go:89] found id: ""
	I1121 14:29:39.758452  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:39.758508  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.763439  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.767912  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:39.767980  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:39.802923  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:39.802948  213058 cri.go:89] found id: ""
	I1121 14:29:39.802957  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:39.803013  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.807778  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:39.807853  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:39.835286  213058 cri.go:89] found id: ""
	I1121 14:29:39.835314  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.835335  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:39.835343  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:39.835408  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:39.864986  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:39.865034  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:39.865040  213058 cri.go:89] found id: ""
	I1121 14:29:39.865050  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:39.865105  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.869441  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.873676  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:39.873739  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:39.902671  213058 cri.go:89] found id: ""
	I1121 14:29:39.902698  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.902707  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:39.902715  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:39.902762  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:39.933452  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:39.933477  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:39.933483  213058 cri.go:89] found id: ""
	I1121 14:29:39.933492  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:39.933557  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.938051  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.942029  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:39.942094  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:39.969991  213058 cri.go:89] found id: ""
	I1121 14:29:39.970018  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.970028  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:39.970036  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:39.970086  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:39.997381  213058 cri.go:89] found id: ""
	I1121 14:29:39.997406  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.997417  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:39.997429  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:39.997443  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:40.027188  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:40.027213  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:40.067878  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:40.067906  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:40.101358  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:40.101388  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:40.115674  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:40.115704  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:40.153845  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:40.153871  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:40.188913  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:40.188944  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:40.244995  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:40.245033  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:40.351506  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:40.351558  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:40.417221  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:40.417244  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:40.417263  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:40.457789  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:40.457836  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:40.520712  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:40.520748  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.056648  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:43.057094  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:43.057150  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:43.057204  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:43.085236  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:43.085260  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:43.085265  213058 cri.go:89] found id: ""
	I1121 14:29:43.085275  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:43.085333  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.089868  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.094074  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:43.094134  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:43.122420  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:43.122447  213058 cri.go:89] found id: ""
	I1121 14:29:43.122457  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:43.122512  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.126830  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:43.126892  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:43.156518  213058 cri.go:89] found id: ""
	I1121 14:29:43.156566  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.156577  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:43.156584  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:43.156646  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:43.185212  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:43.185233  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.185238  213058 cri.go:89] found id: ""
	I1121 14:29:43.185277  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:43.185338  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.190000  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.194074  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:43.194131  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:43.224175  213058 cri.go:89] found id: ""
	I1121 14:29:43.224201  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.224211  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:43.224218  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:43.224277  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:43.258260  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:43.258292  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:43.258299  213058 cri.go:89] found id: ""
	I1121 14:29:43.258310  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:43.258378  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.263276  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.268195  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:43.268264  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:43.303269  213058 cri.go:89] found id: ""
	I1121 14:29:43.303300  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.303311  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:43.303319  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:43.303379  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:43.333956  213058 cri.go:89] found id: ""
	I1121 14:29:43.333985  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.333995  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:43.334007  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:43.334021  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:43.366338  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:43.366369  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:43.458987  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:43.459027  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:43.497960  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:43.497995  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:43.539997  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:43.540035  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:43.575882  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:43.575911  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:40.903405  252125 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:40.903502  252125 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:40.903630  252125 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:41.180390  252125 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:41.211121  252125 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:41.523007  252125 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:42.461521  252125 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:42.641495  252125 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:42.641701  252125 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-921956] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:29:42.773640  252125 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:42.773843  252125 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-921956] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:29:42.921369  252125 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:43.256203  252125 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:43.834470  252125 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:43.834645  252125 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:43.949422  252125 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:44.093777  252125 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:29:44.227287  252125 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:44.509482  252125 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:44.696294  252125 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:44.696767  252125 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:44.705846  252125 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:43.573374  255774 out.go:252]   - Booting up control plane ...
	I1121 14:29:43.573510  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:43.573669  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:43.573781  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:43.590344  255774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:43.590494  255774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:29:43.599838  255774 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:29:43.600184  255774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:43.600247  255774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:43.720721  255774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:29:43.720878  255774 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:29:44.721899  255774 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001196965s
	I1121 14:29:44.724830  255774 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:29:44.724972  255774 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1121 14:29:44.725131  255774 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:29:44.725253  255774 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:29:41.726266  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:42.225460  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:42.725727  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.225740  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.725669  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:44.225350  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:44.725651  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:45.226025  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:45.725289  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:46.226316  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.632243  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:43.632278  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.681909  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:43.681959  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:43.723402  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:43.723454  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:43.776606  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:43.776641  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:43.793171  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:43.793200  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:43.854264  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:43.854293  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:43.854308  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:46.383659  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:46.384075  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:46.384128  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:46.384191  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:46.441629  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:46.441734  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:46.441754  213058 cri.go:89] found id: ""
	I1121 14:29:46.441776  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:46.441873  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.447714  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.453337  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:46.453422  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:46.497451  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:46.497475  213058 cri.go:89] found id: ""
	I1121 14:29:46.497485  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:46.497585  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.504731  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:46.504801  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:46.562972  213058 cri.go:89] found id: ""
	I1121 14:29:46.563014  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.563027  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:46.563036  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:46.563287  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:46.611186  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:46.611216  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:46.611221  213058 cri.go:89] found id: ""
	I1121 14:29:46.611231  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:46.611289  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.620404  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.626388  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:46.626559  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:46.674192  213058 cri.go:89] found id: ""
	I1121 14:29:46.674247  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.674259  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:46.674267  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:46.674448  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:46.749738  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:46.749765  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:46.749771  213058 cri.go:89] found id: ""
	I1121 14:29:46.749780  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:46.749835  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.756273  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.763986  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:46.764120  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:46.811858  213058 cri.go:89] found id: ""
	I1121 14:29:46.811883  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.811901  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:46.811909  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:46.811963  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:46.849599  213058 cri.go:89] found id: ""
	I1121 14:29:46.849645  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.849655  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:46.849666  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:46.849683  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:46.913988  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:46.914024  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:46.953189  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:46.953227  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:47.001663  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:47.001705  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:47.041106  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:47.041137  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:47.107673  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:47.107712  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:47.240432  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:47.240473  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:47.288852  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:47.288894  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1121 14:29:46.531314  255774 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.80645272s
	I1121 14:29:47.509316  255774 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.784421033s
	I1121 14:29:49.226647  255774 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501794549s
	I1121 14:29:49.239409  255774 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:49.252719  255774 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:49.264076  255774 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:49.264371  255774 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-376255 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:49.274799  255774 kubeadm.go:319] [bootstrap-token] Using token: 8nwcfl.9utqukqcvuro6a4p
	I1121 14:29:44.769338  252125 out.go:252]   - Booting up control plane ...
	I1121 14:29:44.769476  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:44.769652  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:44.769771  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:44.769940  252125 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:44.770087  252125 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:29:44.778391  252125 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:29:44.779655  252125 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:44.779729  252125 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:44.894196  252125 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:29:44.894364  252125 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:29:45.895053  252125 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000974959s
	I1121 14:29:45.898754  252125 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:29:45.898875  252125 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1121 14:29:45.899003  252125 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:29:45.899149  252125 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:29:48.621169  252125 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.722350043s
	I1121 14:29:49.059709  252125 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.160801257s
	I1121 14:29:49.276414  255774 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:49.276590  255774 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:49.280532  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:49.287374  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:49.290401  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:49.293308  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:49.297552  255774 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:49.632747  255774 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:46.726037  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:47.228665  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:47.725338  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:48.226199  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:48.725959  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:49.225812  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:49.725337  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.225293  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.310282  249617 kubeadm.go:1114] duration metric: took 12.17154172s to wait for elevateKubeSystemPrivileges
	I1121 14:29:50.310322  249617 kubeadm.go:403] duration metric: took 23.370802852s to StartCluster
	I1121 14:29:50.310347  249617 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:50.310438  249617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:50.311864  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:50.312167  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:50.312169  249617 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:50.312267  249617 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:50.312352  249617 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-012258"
	I1121 14:29:50.312372  249617 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-012258"
	I1121 14:29:50.312403  249617 host.go:66] Checking if "old-k8s-version-012258" exists ...
	I1121 14:29:50.312458  249617 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:50.312516  249617 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-012258"
	I1121 14:29:50.312530  249617 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-012258"
	I1121 14:29:50.312827  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.312965  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.314603  249617 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:50.316238  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:50.339724  249617 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:50.056893  255774 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:50.634602  255774 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:50.635720  255774 kubeadm.go:319] 
	I1121 14:29:50.635840  255774 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:50.635916  255774 kubeadm.go:319] 
	I1121 14:29:50.636085  255774 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:50.636139  255774 kubeadm.go:319] 
	I1121 14:29:50.636189  255774 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:50.636300  255774 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:50.636386  255774 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:50.636448  255774 kubeadm.go:319] 
	I1121 14:29:50.636574  255774 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:50.636584  255774 kubeadm.go:319] 
	I1121 14:29:50.636647  255774 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:50.636652  255774 kubeadm.go:319] 
	I1121 14:29:50.636709  255774 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:50.636796  255774 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:50.636878  255774 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:50.636886  255774 kubeadm.go:319] 
	I1121 14:29:50.636981  255774 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:50.637083  255774 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:50.637090  255774 kubeadm.go:319] 
	I1121 14:29:50.637247  255774 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 8nwcfl.9utqukqcvuro6a4p \
	I1121 14:29:50.637414  255774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:50.637449  255774 kubeadm.go:319] 	--control-plane 
	I1121 14:29:50.637460  255774 kubeadm.go:319] 
	I1121 14:29:50.637571  255774 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:50.637580  255774 kubeadm.go:319] 
	I1121 14:29:50.637672  255774 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 8nwcfl.9utqukqcvuro6a4p \
	I1121 14:29:50.637785  255774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:50.642202  255774 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:50.642513  255774 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:50.642647  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:50.642693  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:50.645524  255774 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:50.339929  249617 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-012258"
	I1121 14:29:50.339977  249617 host.go:66] Checking if "old-k8s-version-012258" exists ...
	I1121 14:29:50.340433  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.341133  249617 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:50.341154  249617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:50.341208  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:50.377822  249617 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:50.377846  249617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:50.377844  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:50.377907  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:50.410483  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:50.415901  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:50.468678  249617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:50.503643  249617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:50.536480  249617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:50.667362  249617 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:50.668484  249617 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-012258" to be "Ready" ...
	I1121 14:29:50.954598  249617 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:29:50.401999  252125 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502477764s
	I1121 14:29:50.419850  252125 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:50.933016  252125 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:50.948821  252125 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:50.949093  252125 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-921956 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:50.961417  252125 kubeadm.go:319] [bootstrap-token] Using token: uhuim0.7wh8hbt7v76eo7qs
	I1121 14:29:50.955828  249617 addons.go:530] duration metric: took 643.55365ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:51.174831  249617 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-012258" context rescaled to 1 replicas
	I1121 14:29:50.963415  252125 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:50.963588  252125 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:50.971176  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:50.980644  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:50.985255  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:50.989946  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:50.994015  252125 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:51.128309  252125 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:51.550178  252125 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:52.128624  252125 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:52.129402  252125 kubeadm.go:319] 
	I1121 14:29:52.129496  252125 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:52.129528  252125 kubeadm.go:319] 
	I1121 14:29:52.129657  252125 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:52.129669  252125 kubeadm.go:319] 
	I1121 14:29:52.129705  252125 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:52.129798  252125 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:52.129906  252125 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:52.129923  252125 kubeadm.go:319] 
	I1121 14:29:52.129995  252125 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:52.130004  252125 kubeadm.go:319] 
	I1121 14:29:52.130078  252125 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:52.130087  252125 kubeadm.go:319] 
	I1121 14:29:52.130170  252125 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:52.130304  252125 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:52.130418  252125 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:52.130446  252125 kubeadm.go:319] 
	I1121 14:29:52.130574  252125 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:52.130677  252125 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:52.130685  252125 kubeadm.go:319] 
	I1121 14:29:52.130797  252125 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uhuim0.7wh8hbt7v76eo7qs \
	I1121 14:29:52.130966  252125 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:52.131000  252125 kubeadm.go:319] 	--control-plane 
	I1121 14:29:52.131035  252125 kubeadm.go:319] 
	I1121 14:29:52.131212  252125 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:52.131230  252125 kubeadm.go:319] 
	I1121 14:29:52.131343  252125 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uhuim0.7wh8hbt7v76eo7qs \
	I1121 14:29:52.131485  252125 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:52.132830  252125 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:52.132967  252125 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:52.133003  252125 cni.go:84] Creating CNI manager for ""
	I1121 14:29:52.133014  252125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:52.134968  252125 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:52.136241  252125 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:52.141107  252125 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:29:52.141131  252125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:52.155585  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:52.395340  252125 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:52.395422  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.395526  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-921956 minikube.k8s.io/updated_at=2025_11_21T14_29_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=no-preload-921956 minikube.k8s.io/primary=true
	I1121 14:29:52.481012  252125 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:52.481125  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.982198  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.481748  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.981282  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.646815  255774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:50.654615  255774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:29:50.654642  255774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:50.673887  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:50.944978  255774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:50.945143  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.945309  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-376255 minikube.k8s.io/updated_at=2025_11_21T14_29_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=default-k8s-diff-port-376255 minikube.k8s.io/primary=true
	I1121 14:29:50.960009  255774 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:51.036596  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:51.537134  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.037345  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.536941  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.037592  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.536966  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.036678  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.536697  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.037499  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.536808  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.610391  255774 kubeadm.go:1114] duration metric: took 4.665295307s to wait for elevateKubeSystemPrivileges
	I1121 14:29:55.610426  255774 kubeadm.go:403] duration metric: took 15.395907943s to StartCluster
	I1121 14:29:55.610448  255774 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:55.610511  255774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:55.612071  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:55.612346  255774 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:55.612498  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:55.612612  255774 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:55.612696  255774 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-376255"
	I1121 14:29:55.612713  255774 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-376255"
	I1121 14:29:55.612745  255774 host.go:66] Checking if "default-k8s-diff-port-376255" exists ...
	I1121 14:29:55.612775  255774 config.go:182] Loaded profile config "default-k8s-diff-port-376255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:55.612835  255774 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-376255"
	I1121 14:29:55.612852  255774 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-376255"
	I1121 14:29:55.613218  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.613392  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.613476  255774 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:55.615420  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:55.641842  255774 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-376255"
	I1121 14:29:55.641893  255774 host.go:66] Checking if "default-k8s-diff-port-376255" exists ...
	I1121 14:29:55.642317  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.647007  255774 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:55.648771  255774 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:55.648807  255774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:55.648882  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:55.679690  255774 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:55.679713  255774 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:55.679780  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:55.680868  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:55.703091  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:55.713751  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:55.781953  255774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:55.795189  255774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:55.811872  255774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:55.895061  255774 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:55.896386  255774 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-376255" to be "Ready" ...
	I1121 14:29:56.162438  255774 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1121 14:29:52.672645  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:29:55.172665  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:29:54.481750  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.981303  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.481778  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.981846  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:56.481336  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:56.981822  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:57.056720  252125 kubeadm.go:1114] duration metric: took 4.66135199s to wait for elevateKubeSystemPrivileges
	I1121 14:29:57.056760  252125 kubeadm.go:403] duration metric: took 16.414821557s to StartCluster
	I1121 14:29:57.056783  252125 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:57.056866  252125 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:57.059279  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:57.059591  252125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:57.059595  252125 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:57.059668  252125 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:57.059755  252125 addons.go:70] Setting storage-provisioner=true in profile "no-preload-921956"
	I1121 14:29:57.059780  252125 addons.go:239] Setting addon storage-provisioner=true in "no-preload-921956"
	I1121 14:29:57.059783  252125 addons.go:70] Setting default-storageclass=true in profile "no-preload-921956"
	I1121 14:29:57.059810  252125 host.go:66] Checking if "no-preload-921956" exists ...
	I1121 14:29:57.059818  252125 config.go:182] Loaded profile config "no-preload-921956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:57.059810  252125 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-921956"
	I1121 14:29:57.060267  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.060366  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.061615  252125 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:57.063049  252125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:57.087511  252125 addons.go:239] Setting addon default-storageclass=true in "no-preload-921956"
	I1121 14:29:57.087574  252125 host.go:66] Checking if "no-preload-921956" exists ...
	I1121 14:29:57.088046  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.088842  252125 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:57.090553  252125 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:57.090577  252125 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:57.090634  252125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-921956
	I1121 14:29:57.113518  252125 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:57.113567  252125 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:57.113644  252125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-921956
	I1121 14:29:57.116604  252125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/no-preload-921956/id_rsa Username:docker}
	I1121 14:29:57.140626  252125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/no-preload-921956/id_rsa Username:docker}
	I1121 14:29:57.162241  252125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:57.221336  252125 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:57.237060  252125 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:57.259845  252125 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:57.393470  252125 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:57.394577  252125 node_ready.go:35] waiting up to 6m0s for node "no-preload-921956" to be "Ready" ...
	I1121 14:29:57.623024  252125 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:29:57.414885  213058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.125971322s)
	W1121 14:29:57.414929  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1121 14:29:57.414939  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:57.414952  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:57.462838  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:57.462881  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:57.526637  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:57.526671  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:57.574224  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:57.574259  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:57.624430  252125 addons.go:530] duration metric: took 564.759261ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:57.898009  252125 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-921956" context rescaled to 1 replicas
	I1121 14:29:56.163632  255774 addons.go:530] duration metric: took 551.031985ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:56.399602  255774 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-376255" context rescaled to 1 replicas
	W1121 14:29:57.899680  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:29:57.174208  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:29:59.672116  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:30:00.114035  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1121 14:29:59.398191  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:01.898360  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:29:59.900344  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:01.900816  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:04.400331  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:01.672252  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:30:04.171805  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:30:05.672011  249617 node_ready.go:49] node "old-k8s-version-012258" is "Ready"
	I1121 14:30:05.672046  249617 node_ready.go:38] duration metric: took 15.003519412s for node "old-k8s-version-012258" to be "Ready" ...
	I1121 14:30:05.672064  249617 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:05.672125  249617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:05.689799  249617 api_server.go:72] duration metric: took 15.377593574s to wait for apiserver process to appear ...
	I1121 14:30:05.689974  249617 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:05.690001  249617 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:30:05.696217  249617 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
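The healthz check used throughout these logs is a plain HTTPS GET against the apiserver's /healthz path, with a 200 and body "ok" treated as healthy. Reproducing the successful probe above by hand, as a sketch that relies on /healthz being readable without credentials (the default system:public-info-viewer binding) and on -k because the cluster CA is self-signed:

	curl -ks https://192.168.94.2:8443/healthz
	# ok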
	I1121 14:30:05.697950  249617 api_server.go:141] control plane version: v1.28.0
	I1121 14:30:05.697978  249617 api_server.go:131] duration metric: took 7.994891ms to wait for apiserver health ...
	I1121 14:30:05.697990  249617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:05.702726  249617 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:05.702769  249617 system_pods.go:61] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.702778  249617 system_pods.go:61] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.702785  249617 system_pods.go:61] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.702796  249617 system_pods.go:61] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.702808  249617 system_pods.go:61] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.702818  249617 system_pods.go:61] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.702822  249617 system_pods.go:61] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.702829  249617 system_pods.go:61] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.702837  249617 system_pods.go:74] duration metric: took 4.84094ms to wait for pod list to return data ...
	I1121 14:30:05.702852  249617 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:05.705127  249617 default_sa.go:45] found service account: "default"
	I1121 14:30:05.705151  249617 default_sa.go:55] duration metric: took 2.290103ms for default service account to be created ...
	I1121 14:30:05.705161  249617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:05.710235  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:05.710318  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.710330  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.710337  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.710367  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.710374  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.710380  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.710385  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.710404  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.710597  249617 retry.go:31] will retry after 257.065607ms: missing components: kube-dns
	I1121 14:30:05.972608  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:05.972648  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.972657  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.972665  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.972676  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.972682  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.972687  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.972692  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.972707  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.972726  249617 retry.go:31] will retry after 339.692313ms: missing components: kube-dns
	I1121 14:30:06.317124  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:06.317155  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Running
	I1121 14:30:06.317160  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:06.317163  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:06.317167  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:06.317171  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:06.317175  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:06.317178  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:06.317181  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Running
	I1121 14:30:06.317188  249617 system_pods.go:126] duration metric: took 612.020803ms to wait for k8s-apps to be running ...
	I1121 14:30:06.317194  249617 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:06.317250  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:06.332295  249617 system_svc.go:56] duration metric: took 15.088564ms WaitForService to wait for kubelet
	I1121 14:30:06.332331  249617 kubeadm.go:587] duration metric: took 16.020134285s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:06.332357  249617 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:06.338044  249617 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:06.338071  249617 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:06.338084  249617 node_conditions.go:105] duration metric: took 5.72136ms to run NodePressure ...
	I1121 14:30:06.338096  249617 start.go:242] waiting for startup goroutines ...
	I1121 14:30:06.338102  249617 start.go:247] waiting for cluster config update ...
	I1121 14:30:06.338113  249617 start.go:256] writing updated cluster config ...
	I1121 14:30:06.338382  249617 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:06.342534  249617 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
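The extra waiting step above polls every kube-system pod carrying one of the listed labels until it is Ready or gone. A rough standalone equivalent with kubectl wait, as a sketch only: unlike pod_ready.go it errors out, rather than succeeding, when a selector matches no pods at all.

	for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
	         component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context old-k8s-version-012258 -n kube-system \
	    wait --for=condition=Ready pod -l "$l" --timeout=240s
	done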
	I1121 14:30:06.347323  249617 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vst4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.352062  249617 pod_ready.go:94] pod "coredns-5dd5756b68-vst4c" is "Ready"
	I1121 14:30:06.352087  249617 pod_ready.go:86] duration metric: took 4.697932ms for pod "coredns-5dd5756b68-vst4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.354946  249617 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.359326  249617 pod_ready.go:94] pod "etcd-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.359355  249617 pod_ready.go:86] duration metric: took 4.388182ms for pod "etcd-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.362007  249617 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.366060  249617 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.366081  249617 pod_ready.go:86] duration metric: took 4.051984ms for pod "kube-apiserver-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.368789  249617 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.746914  249617 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.746952  249617 pod_ready.go:86] duration metric: took 378.141903ms for pod "kube-controller-manager-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.947790  249617 pod_ready.go:83] waiting for pod "kube-proxy-wsp2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.347266  249617 pod_ready.go:94] pod "kube-proxy-wsp2w" is "Ready"
	I1121 14:30:07.347291  249617 pod_ready.go:86] duration metric: took 399.477159ms for pod "kube-proxy-wsp2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.547233  249617 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.946728  249617 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-012258" is "Ready"
	I1121 14:30:07.946756  249617 pod_ready.go:86] duration metric: took 399.500525ms for pod "kube-scheduler-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.946772  249617 pod_ready.go:40] duration metric: took 1.604187461s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:08.009909  249617 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1121 14:30:08.014607  249617 out.go:203] 
	W1121 14:30:08.016075  249617 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1121 14:30:08.020782  249617 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1121 14:30:08.022622  249617 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-012258" cluster and "default" namespace by default
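The warning a few lines up (kubectl 1.34.2 against a v1.28.0 control plane, minor skew 6) is well outside kubectl's supported window of one minor version in either direction. The hint in the log sidesteps it by using the kubectl that minikube downloads to match the cluster, for example (sketch):

	minikube -p old-k8s-version-012258 kubectl -- get pods -A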
	I1121 14:30:05.115052  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1121 14:30:05.115115  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:05.115188  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:05.143819  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:05.143839  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:30:05.143843  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:05.143846  213058 cri.go:89] found id: ""
	I1121 14:30:05.143853  213058 logs.go:282] 3 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:05.143912  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.148585  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.152984  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.156944  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:05.157004  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:05.185404  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:05.185430  213058 cri.go:89] found id: ""
	I1121 14:30:05.185440  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:05.185498  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.190360  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:05.190432  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:05.222964  213058 cri.go:89] found id: ""
	I1121 14:30:05.222989  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.222999  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:05.223006  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:05.223058  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:05.254414  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:05.254436  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:05.254440  213058 cri.go:89] found id: ""
	I1121 14:30:05.254447  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:05.254505  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.258766  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.262456  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:05.262524  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:05.288454  213058 cri.go:89] found id: ""
	I1121 14:30:05.288486  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.288496  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:05.288505  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:05.288598  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:05.317814  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:05.317841  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:05.317847  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:05.317851  213058 cri.go:89] found id: ""
	I1121 14:30:05.317861  213058 logs.go:282] 3 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:05.317930  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.322506  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.326684  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.330828  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:05.330957  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:05.360073  213058 cri.go:89] found id: ""
	I1121 14:30:05.360098  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.360107  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:05.360116  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:05.360171  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:05.388524  213058 cri.go:89] found id: ""
	I1121 14:30:05.388561  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.388573  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:05.388587  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:05.388602  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:05.427247  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:05.427279  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:05.517583  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:05.517615  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:05.556205  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:30:05.556238  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:30:05.601637  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:05.601692  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:05.642125  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:05.642167  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:05.707252  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:05.707295  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:05.747947  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:05.747990  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:05.767646  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:05.767678  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
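Each log-gathering pass for this process (PID 213058) follows the same pattern per component: resolve candidate container IDs with crictl, then tail each container's log. Stripped down to the two underlying commands (sketch; the ID shown is one of the kube-apiserver containers listed above):

	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780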
	W1121 14:30:04.398534  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:06.897181  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:08.897492  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:06.900285  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	I1121 14:30:07.400113  255774 node_ready.go:49] node "default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:07.400148  255774 node_ready.go:38] duration metric: took 11.503726167s for node "default-k8s-diff-port-376255" to be "Ready" ...
	I1121 14:30:07.400166  255774 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:07.400227  255774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:07.416428  255774 api_server.go:72] duration metric: took 11.804040955s to wait for apiserver process to appear ...
	I1121 14:30:07.416462  255774 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:07.416487  255774 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1121 14:30:07.423355  255774 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1121 14:30:07.424441  255774 api_server.go:141] control plane version: v1.34.1
	I1121 14:30:07.424471  255774 api_server.go:131] duration metric: took 8.001103ms to wait for apiserver health ...
	I1121 14:30:07.424480  255774 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:07.428816  255774 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:07.428856  255774 system_pods.go:61] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.428866  255774 system_pods.go:61] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.428874  255774 system_pods.go:61] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.428880  255774 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.428886  255774 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.428891  255774 system_pods.go:61] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.428899  255774 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.428912  255774 system_pods.go:61] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.428921  255774 system_pods.go:74] duration metric: took 4.433771ms to wait for pod list to return data ...
	I1121 14:30:07.428932  255774 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:07.431771  255774 default_sa.go:45] found service account: "default"
	I1121 14:30:07.431794  255774 default_sa.go:55] duration metric: took 2.856811ms for default service account to be created ...
	I1121 14:30:07.431804  255774 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:07.435787  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.435816  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.435821  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.435826  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.435830  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.435833  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.435836  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.435841  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.435846  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.435871  255774 retry.go:31] will retry after 217.060579ms: missing components: kube-dns
	I1121 14:30:07.656900  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.656930  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.656937  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.656945  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.656950  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.656955  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.656959  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.656964  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.656970  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.656989  255774 retry.go:31] will retry after 330.648304ms: missing components: kube-dns
	I1121 14:30:07.995514  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.995612  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.995626  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.995636  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.995642  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.995653  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.995659  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.995664  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.995683  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.995713  255774 retry.go:31] will retry after 466.383408ms: missing components: kube-dns
	I1121 14:30:08.466385  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:08.466414  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Running
	I1121 14:30:08.466419  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:08.466423  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:08.466427  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:08.466430  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:08.466435  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:08.466438  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:08.466441  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Running
	I1121 14:30:08.466448  255774 system_pods.go:126] duration metric: took 1.034639333s to wait for k8s-apps to be running ...
	I1121 14:30:08.466454  255774 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:08.466495  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:08.480058  255774 system_svc.go:56] duration metric: took 13.59071ms WaitForService to wait for kubelet
	I1121 14:30:08.480087  255774 kubeadm.go:587] duration metric: took 12.867708638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:08.480104  255774 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:08.483054  255774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:08.483077  255774 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:08.483089  255774 node_conditions.go:105] duration metric: took 2.980591ms to run NodePressure ...
	I1121 14:30:08.483101  255774 start.go:242] waiting for startup goroutines ...
	I1121 14:30:08.483107  255774 start.go:247] waiting for cluster config update ...
	I1121 14:30:08.483116  255774 start.go:256] writing updated cluster config ...
	I1121 14:30:08.483378  255774 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:08.487457  255774 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:08.490869  255774 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.495613  255774 pod_ready.go:94] pod "coredns-66bc5c9577-fr27b" is "Ready"
	I1121 14:30:08.495638  255774 pod_ready.go:86] duration metric: took 4.745112ms for pod "coredns-66bc5c9577-fr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.498070  255774 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.502098  255774 pod_ready.go:94] pod "etcd-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.502122  255774 pod_ready.go:86] duration metric: took 4.029361ms for pod "etcd-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.504276  255774 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.508229  255774 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.508250  255774 pod_ready.go:86] duration metric: took 3.957821ms for pod "kube-apiserver-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.510387  255774 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.891344  255774 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.891369  255774 pod_ready.go:86] duration metric: took 380.959206ms for pod "kube-controller-manager-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.091636  255774 pod_ready.go:83] waiting for pod "kube-proxy-hdplf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.492078  255774 pod_ready.go:94] pod "kube-proxy-hdplf" is "Ready"
	I1121 14:30:09.492108  255774 pod_ready.go:86] duration metric: took 400.444722ms for pod "kube-proxy-hdplf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.693278  255774 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:10.092105  255774 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:10.092133  255774 pod_ready.go:86] duration metric: took 398.824976ms for pod "kube-scheduler-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:10.092146  255774 pod_ready.go:40] duration metric: took 1.604655578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:10.138628  255774 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:30:10.140593  255774 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-376255" cluster and "default" namespace by default
	I1121 14:30:08.754284  213058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (2.986586875s)
	W1121 14:30:08.754342  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60538->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60538->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1121 14:30:08.754352  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:08.754366  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:08.789119  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:08.789149  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:08.842933  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:08.842974  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:08.880878  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:08.880919  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:08.910920  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:30:08.910953  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.440020  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:11.440496  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:11.440556  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:11.440601  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:11.472645  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:11.472669  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:11.472674  213058 cri.go:89] found id: ""
	I1121 14:30:11.472683  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:11.472748  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.478061  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.482946  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:11.483034  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:11.517693  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:11.517722  213058 cri.go:89] found id: ""
	I1121 14:30:11.517732  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:11.517797  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.523621  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:11.523699  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:11.559155  213058 cri.go:89] found id: ""
	I1121 14:30:11.559194  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.559204  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:11.559212  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:11.559271  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:11.595093  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:11.595127  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:11.595133  213058 cri.go:89] found id: ""
	I1121 14:30:11.595143  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:11.595194  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.600085  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.604973  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:11.605048  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:11.639606  213058 cri.go:89] found id: ""
	I1121 14:30:11.639636  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.639647  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:11.639653  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:11.639713  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:11.684373  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:11.684400  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.684405  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:11.684410  213058 cri.go:89] found id: ""
	I1121 14:30:11.684421  213058 logs.go:282] 3 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:11.684482  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.689732  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.695253  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.701315  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:11.701388  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:11.732802  213058 cri.go:89] found id: ""
	I1121 14:30:11.732831  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.732841  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:11.732848  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:11.732907  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:11.761686  213058 cri.go:89] found id: ""
	I1121 14:30:11.761717  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.761729  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:11.761741  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:11.761756  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:11.816634  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:11.816670  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:11.846024  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:30:11.846055  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.876932  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:11.876964  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:11.912984  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:11.913018  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:11.965381  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:11.965423  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:11.997477  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:11.997509  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:12.011497  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:12.011524  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:12.071024  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:12.071049  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:12.071065  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:12.106865  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:12.106898  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:12.141245  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:12.141276  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:12.176551  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:12.176600  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:12.268742  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:12.268780  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	W1121 14:30:10.897620  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	I1121 14:30:11.398100  252125 node_ready.go:49] node "no-preload-921956" is "Ready"
	I1121 14:30:11.398128  252125 node_ready.go:38] duration metric: took 14.003530083s for node "no-preload-921956" to be "Ready" ...
	I1121 14:30:11.398142  252125 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:11.398195  252125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:11.412043  252125 api_server.go:72] duration metric: took 14.35241025s to wait for apiserver process to appear ...
	I1121 14:30:11.412070  252125 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:11.412087  252125 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1121 14:30:11.417254  252125 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1121 14:30:11.418517  252125 api_server.go:141] control plane version: v1.34.1
	I1121 14:30:11.418570  252125 api_server.go:131] duration metric: took 6.492303ms to wait for apiserver health ...
	I1121 14:30:11.418581  252125 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:11.421927  252125 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:11.422024  252125 system_pods.go:61] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.422034  252125 system_pods.go:61] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.422047  252125 system_pods.go:61] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.422059  252125 system_pods.go:61] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.422069  252125 system_pods.go:61] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.422073  252125 system_pods.go:61] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.422077  252125 system_pods.go:61] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.422082  252125 system_pods.go:61] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.422094  252125 system_pods.go:74] duration metric: took 3.505153ms to wait for pod list to return data ...
	I1121 14:30:11.422109  252125 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:11.424685  252125 default_sa.go:45] found service account: "default"
	I1121 14:30:11.424710  252125 default_sa.go:55] duration metric: took 2.591611ms for default service account to be created ...
	I1121 14:30:11.424722  252125 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:11.427627  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.427680  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.427689  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.427703  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.427713  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.427721  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.427726  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.427731  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.427737  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.427768  252125 retry.go:31] will retry after 234.428318ms: missing components: kube-dns
	I1121 14:30:11.669788  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.669831  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.669840  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.669850  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.669858  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.669865  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.669871  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.669877  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.669893  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.669919  252125 retry.go:31] will retry after 250.085803ms: missing components: kube-dns
	I1121 14:30:11.924517  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.924602  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.924614  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.924627  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.924633  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.924642  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.924647  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.924653  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.924661  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.924682  252125 retry.go:31] will retry after 441.862758ms: missing components: kube-dns
	I1121 14:30:12.371065  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:12.371110  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:12.371122  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:12.371131  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:12.371136  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:12.371142  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:12.371147  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:12.371158  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:12.371170  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:12.371189  252125 retry.go:31] will retry after 502.578888ms: missing components: kube-dns
	I1121 14:30:12.879209  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:12.879243  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Running
	I1121 14:30:12.879249  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:12.879253  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:12.879258  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:12.879268  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:12.879271  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:12.879275  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:12.879278  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Running
	I1121 14:30:12.879289  252125 system_pods.go:126] duration metric: took 1.454561179s to wait for k8s-apps to be running ...
	I1121 14:30:12.879301  252125 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:12.879351  252125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:12.894061  252125 system_svc.go:56] duration metric: took 14.74714ms WaitForService to wait for kubelet
	I1121 14:30:12.894092  252125 kubeadm.go:587] duration metric: took 15.834465857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:12.894115  252125 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:12.897599  252125 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:12.897630  252125 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:12.897641  252125 node_conditions.go:105] duration metric: took 3.520753ms to run NodePressure ...
	I1121 14:30:12.897652  252125 start.go:242] waiting for startup goroutines ...
	I1121 14:30:12.897659  252125 start.go:247] waiting for cluster config update ...
	I1121 14:30:12.897669  252125 start.go:256] writing updated cluster config ...
	I1121 14:30:12.897983  252125 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:12.902897  252125 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:12.906562  252125 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s4rzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.912263  252125 pod_ready.go:94] pod "coredns-66bc5c9577-s4rzb" is "Ready"
	I1121 14:30:12.912286  252125 pod_ready.go:86] duration metric: took 5.702456ms for pod "coredns-66bc5c9577-s4rzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.915190  252125 pod_ready.go:83] waiting for pod "etcd-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.919870  252125 pod_ready.go:94] pod "etcd-no-preload-921956" is "Ready"
	I1121 14:30:12.919896  252125 pod_ready.go:86] duration metric: took 4.68423ms for pod "etcd-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.921926  252125 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.925984  252125 pod_ready.go:94] pod "kube-apiserver-no-preload-921956" is "Ready"
	I1121 14:30:12.926012  252125 pod_ready.go:86] duration metric: took 4.065762ms for pod "kube-apiserver-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.928283  252125 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.307608  252125 pod_ready.go:94] pod "kube-controller-manager-no-preload-921956" is "Ready"
	I1121 14:30:13.307639  252125 pod_ready.go:86] duration metric: took 379.335151ms for pod "kube-controller-manager-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.508229  252125 pod_ready.go:83] waiting for pod "kube-proxy-wmx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.907070  252125 pod_ready.go:94] pod "kube-proxy-wmx7z" is "Ready"
	I1121 14:30:13.907101  252125 pod_ready.go:86] duration metric: took 398.843128ms for pod "kube-proxy-wmx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.108040  252125 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.507264  252125 pod_ready.go:94] pod "kube-scheduler-no-preload-921956" is "Ready"
	I1121 14:30:14.507293  252125 pod_ready.go:86] duration metric: took 399.219492ms for pod "kube-scheduler-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.507307  252125 pod_ready.go:40] duration metric: took 1.604362709s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:14.554506  252125 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:30:14.556366  252125 out.go:179] * Done! kubectl is now configured to use "no-preload-921956" cluster and "default" namespace by default
	I1121 14:30:14.802507  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:14.803048  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:14.803100  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:14.803156  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:14.832438  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:14.832464  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:14.832469  213058 cri.go:89] found id: ""
	I1121 14:30:14.832479  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:14.832560  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.836869  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.840970  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:14.841027  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:14.869276  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:14.869297  213058 cri.go:89] found id: ""
	I1121 14:30:14.869306  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:14.869364  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.873530  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:14.873616  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:14.902293  213058 cri.go:89] found id: ""
	I1121 14:30:14.902325  213058 logs.go:282] 0 containers: []
	W1121 14:30:14.902336  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:14.902343  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:14.902396  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:14.931422  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:14.931444  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:14.931448  213058 cri.go:89] found id: ""
	I1121 14:30:14.931455  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:14.931507  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.936188  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.940673  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:14.940742  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:14.969277  213058 cri.go:89] found id: ""
	I1121 14:30:14.969308  213058 logs.go:282] 0 containers: []
	W1121 14:30:14.969320  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:14.969328  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:14.969386  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:14.999162  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:14.999190  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:14.999195  213058 cri.go:89] found id: ""
	I1121 14:30:14.999209  213058 logs.go:282] 2 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:14.999275  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:15.003627  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:15.008044  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:15.008149  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:15.036025  213058 cri.go:89] found id: ""
	I1121 14:30:15.036050  213058 logs.go:282] 0 containers: []
	W1121 14:30:15.036061  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:15.036069  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:15.036123  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:15.064814  213058 cri.go:89] found id: ""
	I1121 14:30:15.064840  213058 logs.go:282] 0 containers: []
	W1121 14:30:15.064851  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:15.064863  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:15.064877  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:15.105369  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:15.105412  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:15.145479  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:15.145521  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:15.186460  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:15.186498  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:15.233156  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:15.233196  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:15.328776  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:15.328824  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:15.343510  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:15.343556  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:15.375919  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:15.375959  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:15.412267  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:15.412310  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:15.467388  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:15.467422  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:15.495400  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:15.495451  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:15.527880  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:15.527906  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:15.589380  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:18.090626  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:18.091055  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:18.091106  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:18.091154  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:18.119750  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:18.119777  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:18.119781  213058 cri.go:89] found id: ""
	I1121 14:30:18.119788  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:18.119846  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.124441  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.128481  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:18.128574  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:18.155968  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:18.155990  213058 cri.go:89] found id: ""
	I1121 14:30:18.156000  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:18.156056  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.160457  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:18.160529  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:18.191869  213058 cri.go:89] found id: ""
	I1121 14:30:18.191899  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.191909  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:18.191916  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:18.191990  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:18.222614  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:18.222639  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:18.222644  213058 cri.go:89] found id: ""
	I1121 14:30:18.222653  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:18.222710  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.227248  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.231976  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:18.232054  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:18.261651  213058 cri.go:89] found id: ""
	I1121 14:30:18.261686  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.261696  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:18.261703  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:18.261756  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:18.293248  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:18.293277  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:18.293283  213058 cri.go:89] found id: ""
	I1121 14:30:18.293291  213058 logs.go:282] 2 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:18.293360  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.297988  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.302375  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:18.302444  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:18.331900  213058 cri.go:89] found id: ""
	I1121 14:30:18.331976  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.331989  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:18.331997  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:18.332053  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:18.362314  213058 cri.go:89] found id: ""
	I1121 14:30:18.362341  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.362351  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:18.362363  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:18.362378  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:18.401362  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:18.401403  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:18.453554  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:18.453597  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:18.470719  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:18.470750  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:18.535220  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:18.535241  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:18.535255  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:18.572460  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:18.572490  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8c4937852627b       56cc512116c8f       6 seconds ago       Running             busybox                   0                   55e524b70455d       busybox                                     default
	f0247ece715b4       52546a367cc9e       12 seconds ago      Running             coredns                   0                   9cde47ebfdaa9       coredns-66bc5c9577-s4rzb                    kube-system
	e791a48ad06a8       6e38f40d628db       12 seconds ago      Running             storage-provisioner       0                   f3b466e434694       storage-provisioner                         kube-system
	eac07ec6addf2       409467f978b4a       23 seconds ago      Running             kindnet-cni               0                   4141af88e24d8       kindnet-kf24h                               kube-system
	3dad3f2e239b1       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   7397f89f7a39e       kube-proxy-wmx7z                            kube-system
	1cd8f6c5ba170       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   c6ae47a54c220       etcd-no-preload-921956                      kube-system
	dceea14c3e55c       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   c7aa7d1c46c19       kube-scheduler-no-preload-921956            kube-system
	bc0261d84f559       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   773140ae1c786       kube-controller-manager-no-preload-921956   kube-system
	1477917e1b2ba       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   9ce03a4904943       kube-apiserver-no-preload-921956            kube-system
	
	
	==> containerd <==
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.596124481Z" level=info msg="Container f0247ece715b4958efae207e856309ce86470b495b029ec8772800dfff991961: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.599105052Z" level=info msg="CreateContainer within sandbox \"f3b466e43469423250b24f5b0c583a3d95b0b05abfa084da3a0674a3b91b7692\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"e791a48ad06a8b7b9513e1f9e2d3ca8efa6a1f6e2a87bde2ee89459cc8d4f03f\""
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.599787885Z" level=info msg="StartContainer for \"e791a48ad06a8b7b9513e1f9e2d3ca8efa6a1f6e2a87bde2ee89459cc8d4f03f\""
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.600927107Z" level=info msg="connecting to shim e791a48ad06a8b7b9513e1f9e2d3ca8efa6a1f6e2a87bde2ee89459cc8d4f03f" address="unix:///run/containerd/s/bba5dae34a16be5c8ec0d6ba65f8dc232accb717c30abd045510178f2ece1097" protocol=ttrpc version=3
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.605360819Z" level=info msg="CreateContainer within sandbox \"9cde47ebfdaa9dc6352c0279f0ef10eb6bc8edbda3437a2be73e3f941df07baa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0247ece715b4958efae207e856309ce86470b495b029ec8772800dfff991961\""
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.606044693Z" level=info msg="StartContainer for \"f0247ece715b4958efae207e856309ce86470b495b029ec8772800dfff991961\""
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.607233194Z" level=info msg="connecting to shim f0247ece715b4958efae207e856309ce86470b495b029ec8772800dfff991961" address="unix:///run/containerd/s/02eaeb044cebb741c9be7dd0480408b231479620953f130c4ea28518fb0c35e1" protocol=ttrpc version=3
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.659637183Z" level=info msg="StartContainer for \"e791a48ad06a8b7b9513e1f9e2d3ca8efa6a1f6e2a87bde2ee89459cc8d4f03f\" returns successfully"
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.668393940Z" level=info msg="StartContainer for \"f0247ece715b4958efae207e856309ce86470b495b029ec8772800dfff991961\" returns successfully"
	Nov 21 14:30:15 no-preload-921956 containerd[656]: time="2025-11-21T14:30:15.034110596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:73c5bb38-ca7b-4848-93a8-0622f9c1292e,Namespace:default,Attempt:0,}"
	Nov 21 14:30:15 no-preload-921956 containerd[656]: time="2025-11-21T14:30:15.084373462Z" level=info msg="connecting to shim 55e524b70455dae1bc437f826bd01d57b2251dbf52109d5dcb25d763ab0edb06" address="unix:///run/containerd/s/4feb4fb31aa5a0c32168b8915d9839ae11cc3ce53dd9bee66d84fc9395ffbfd9" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:30:15 no-preload-921956 containerd[656]: time="2025-11-21T14:30:15.168491854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:73c5bb38-ca7b-4848-93a8-0622f9c1292e,Namespace:default,Attempt:0,} returns sandbox id \"55e524b70455dae1bc437f826bd01d57b2251dbf52109d5dcb25d763ab0edb06\""
	Nov 21 14:30:15 no-preload-921956 containerd[656]: time="2025-11-21T14:30:15.170616417Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.263294009Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.264248753Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.265905308Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.268366369Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.268840479Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.098179974s"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.268878879Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.273637858Z" level=info msg="CreateContainer within sandbox \"55e524b70455dae1bc437f826bd01d57b2251dbf52109d5dcb25d763ab0edb06\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.283313422Z" level=info msg="Container 8c4937852627be2f75610b3bf01e69fa974c11e5e948a23f0ce22cead778239d: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.290682241Z" level=info msg="CreateContainer within sandbox \"55e524b70455dae1bc437f826bd01d57b2251dbf52109d5dcb25d763ab0edb06\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8c4937852627be2f75610b3bf01e69fa974c11e5e948a23f0ce22cead778239d\""
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.291389237Z" level=info msg="StartContainer for \"8c4937852627be2f75610b3bf01e69fa974c11e5e948a23f0ce22cead778239d\""
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.292566937Z" level=info msg="connecting to shim 8c4937852627be2f75610b3bf01e69fa974c11e5e948a23f0ce22cead778239d" address="unix:///run/containerd/s/4feb4fb31aa5a0c32168b8915d9839ae11cc3ce53dd9bee66d84fc9395ffbfd9" protocol=ttrpc version=3
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.356834059Z" level=info msg="StartContainer for \"8c4937852627be2f75610b3bf01e69fa974c11e5e948a23f0ce22cead778239d\" returns successfully"
	
	
	==> coredns [f0247ece715b4958efae207e856309ce86470b495b029ec8772800dfff991961] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35123 - 15966 "HINFO IN 8318159525879143492.5771029268899257213. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016728094s
	
	
	==> describe nodes <==
	Name:               no-preload-921956
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-921956
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=no-preload-921956
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_29_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:29:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-921956
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:30:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:30:21 +0000   Fri, 21 Nov 2025 14:29:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:30:21 +0000   Fri, 21 Nov 2025 14:29:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:30:21 +0000   Fri, 21 Nov 2025 14:29:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:30:21 +0000   Fri, 21 Nov 2025 14:30:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-921956
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                1dcac8a0-c5fe-4b74-ba51-ed10e93db1e4
	  Boot ID:                    f900700b-0668-4d24-87ff-85e15fbda365
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-s4rzb                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-921956                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-kf24h                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-921956             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-921956    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-wmx7z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-921956             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node no-preload-921956 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node no-preload-921956 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x7 over 39s)  kubelet          Node no-preload-921956 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  33s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s                kubelet          Node no-preload-921956 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet          Node no-preload-921956 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet          Node no-preload-921956 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node no-preload-921956 event: Registered Node no-preload-921956 in Controller
	  Normal  NodeReady                13s                kubelet          Node no-preload-921956 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001887] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086016] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.440508] i8042: Warning: Keylock active
	[  +0.011202] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.526419] block sda: the capability attribute has been deprecated.
	[  +0.095215] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027093] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.485024] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [1cd8f6c5ba170d50593b90924ece3788f3f7ca38f69386bcb4ca7460314ee602] <==
	{"level":"warn","ts":"2025-11-21T14:29:47.878895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.887804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.894410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.900867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.909263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.920479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.927845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.934193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.940976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.950726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.958627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.964786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.972333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.979064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.985577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.993352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.999726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.006386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.014105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.022067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.028438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.045226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.052092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.058771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.113695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56782","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:30:24 up  1:12,  0 user,  load average: 4.09, 3.08, 1.94
	Linux no-preload-921956 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eac07ec6addf2c3febabe11770b0db6eabded99628063a2320ab08d5aa9cdd49] <==
	I1121 14:30:00.835156       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:30:00.835421       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1121 14:30:00.835585       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:30:00.835625       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:30:00.835654       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:30:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:30:01.041084       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:30:01.041134       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:30:01.041147       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:30:01.041272       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:30:01.432758       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:30:01.432792       1 metrics.go:72] Registering metrics
	I1121 14:30:01.432861       1 controller.go:711] "Syncing nftables rules"
	I1121 14:30:11.041897       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1121 14:30:11.041946       1 main.go:301] handling current node
	I1121 14:30:21.043666       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1121 14:30:21.043701       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1477917e1b2ba485a1dafbeed3092c99981ab3ad1049c6edfeaa40700522baa0] <==
	I1121 14:29:48.663816       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:29:48.666143       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:29:48.667239       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:48.667302       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:29:48.672625       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:48.673341       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:29:48.847047       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:29:49.552186       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:29:49.556456       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:29:49.556474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:29:50.185571       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:29:50.235452       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:29:50.365412       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:29:50.377977       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1121 14:29:50.379591       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:29:50.387072       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:29:51.073317       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:29:51.535164       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:29:51.549117       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:29:51.559253       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:29:56.775670       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:29:56.826496       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:56.831297       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:57.025433       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1121 14:30:22.848164       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:56266: use of closed network connection
	
	
	==> kube-controller-manager [bc0261d84f559991c2c7db2cb8fe481647263c9de84272911c0785f71feff57d] <==
	I1121 14:29:56.038702       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:29:56.045087       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:29:56.047407       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 14:29:56.047567       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 14:29:56.047671       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-921956"
	I1121 14:29:56.047720       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 14:29:56.072292       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:29:56.072337       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:29:56.072333       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:29:56.073044       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:29:56.073217       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:29:56.073249       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:29:56.073646       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:29:56.073724       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:29:56.073748       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:29:56.073860       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:29:56.073758       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:29:56.074477       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:29:56.076345       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:29:56.077593       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 14:29:56.080104       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:29:56.081343       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:29:56.083488       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:29:56.099986       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:30:16.051796       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3dad3f2e239b136aa9dce1235e9f83bbd957833abd6ad7034e20e8959c852a1c] <==
	I1121 14:29:57.504793       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:29:57.569093       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:29:57.669420       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:29:57.669502       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1121 14:29:57.669659       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:29:57.692870       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:29:57.692927       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:29:57.698501       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:29:57.698871       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:29:57.699346       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:29:57.701848       1 config.go:309] "Starting node config controller"
	I1121 14:29:57.701907       1 config.go:200] "Starting service config controller"
	I1121 14:29:57.701909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:29:57.701939       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:29:57.701958       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:29:57.701963       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:29:57.701974       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:29:57.701978       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:29:57.803028       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:29:57.803065       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:29:57.803093       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:29:57.803108       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [dceea14c3e55c1a529a35c8e722b2d06d123c9b495c35eaff2b753a6f6697b67] <==
	E1121 14:29:48.618742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:29:48.618804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:29:48.618802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:29:48.618808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:29:48.618843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:29:48.618938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:29:48.619124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:29:48.619244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:29:49.438503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 14:29:49.464267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:29:49.527091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:29:49.549569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:29:49.584100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:29:49.616890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:29:49.641312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:29:49.697686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:29:49.718105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:29:49.727889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:29:49.781417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:29:49.781426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:29:49.853146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:29:49.890894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:29:49.911488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:29:49.981052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1121 14:29:51.214830       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:29:52 no-preload-921956 kubelet[2141]: I1121 14:29:52.452891    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-921956" podStartSLOduration=3.452867833 podStartE2EDuration="3.452867833s" podCreationTimestamp="2025-11-21 14:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:52.44283189 +0000 UTC m=+1.141650376" watchObservedRunningTime="2025-11-21 14:29:52.452867833 +0000 UTC m=+1.151686315"
	Nov 21 14:29:52 no-preload-921956 kubelet[2141]: I1121 14:29:52.463635    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-921956" podStartSLOduration=1.463617617 podStartE2EDuration="1.463617617s" podCreationTimestamp="2025-11-21 14:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:52.463603925 +0000 UTC m=+1.162422411" watchObservedRunningTime="2025-11-21 14:29:52.463617617 +0000 UTC m=+1.162436106"
	Nov 21 14:29:52 no-preload-921956 kubelet[2141]: I1121 14:29:52.463750    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-921956" podStartSLOduration=1.4637427920000001 podStartE2EDuration="1.463742792s" podCreationTimestamp="2025-11-21 14:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:52.453267749 +0000 UTC m=+1.152086236" watchObservedRunningTime="2025-11-21 14:29:52.463742792 +0000 UTC m=+1.162561278"
	Nov 21 14:29:52 no-preload-921956 kubelet[2141]: I1121 14:29:52.485271    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-921956" podStartSLOduration=1.485248201 podStartE2EDuration="1.485248201s" podCreationTimestamp="2025-11-21 14:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:52.475134404 +0000 UTC m=+1.173952890" watchObservedRunningTime="2025-11-21 14:29:52.485248201 +0000 UTC m=+1.184066687"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.068280    2141 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.069161    2141 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816659    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d5a84f9-144c-4920-a08d-478587a56498-xtables-lock\") pod \"kube-proxy-wmx7z\" (UID: \"7d5a84f9-144c-4920-a08d-478587a56498\") " pod="kube-system/kube-proxy-wmx7z"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816708    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hckm\" (UniqueName: \"kubernetes.io/projected/7d5a84f9-144c-4920-a08d-478587a56498-kube-api-access-2hckm\") pod \"kube-proxy-wmx7z\" (UID: \"7d5a84f9-144c-4920-a08d-478587a56498\") " pod="kube-system/kube-proxy-wmx7z"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816738    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c698f297-3ff4-4f90-a871-5c4c944b9e61-cni-cfg\") pod \"kindnet-kf24h\" (UID: \"c698f297-3ff4-4f90-a871-5c4c944b9e61\") " pod="kube-system/kindnet-kf24h"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816760    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c698f297-3ff4-4f90-a871-5c4c944b9e61-lib-modules\") pod \"kindnet-kf24h\" (UID: \"c698f297-3ff4-4f90-a871-5c4c944b9e61\") " pod="kube-system/kindnet-kf24h"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816781    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjfw\" (UniqueName: \"kubernetes.io/projected/c698f297-3ff4-4f90-a871-5c4c944b9e61-kube-api-access-ljjfw\") pod \"kindnet-kf24h\" (UID: \"c698f297-3ff4-4f90-a871-5c4c944b9e61\") " pod="kube-system/kindnet-kf24h"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816843    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7d5a84f9-144c-4920-a08d-478587a56498-kube-proxy\") pod \"kube-proxy-wmx7z\" (UID: \"7d5a84f9-144c-4920-a08d-478587a56498\") " pod="kube-system/kube-proxy-wmx7z"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816892    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d5a84f9-144c-4920-a08d-478587a56498-lib-modules\") pod \"kube-proxy-wmx7z\" (UID: \"7d5a84f9-144c-4920-a08d-478587a56498\") " pod="kube-system/kube-proxy-wmx7z"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816948    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c698f297-3ff4-4f90-a871-5c4c944b9e61-xtables-lock\") pod \"kindnet-kf24h\" (UID: \"c698f297-3ff4-4f90-a871-5c4c944b9e61\") " pod="kube-system/kindnet-kf24h"
	Nov 21 14:29:58 no-preload-921956 kubelet[2141]: I1121 14:29:58.461118    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wmx7z" podStartSLOduration=2.46109374 podStartE2EDuration="2.46109374s" podCreationTimestamp="2025-11-21 14:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:58.450619586 +0000 UTC m=+7.149438070" watchObservedRunningTime="2025-11-21 14:29:58.46109374 +0000 UTC m=+7.159912228"
	Nov 21 14:30:01 no-preload-921956 kubelet[2141]: I1121 14:30:01.491757    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kf24h" podStartSLOduration=2.548497999 podStartE2EDuration="5.491739095s" podCreationTimestamp="2025-11-21 14:29:56 +0000 UTC" firstStartedPulling="2025-11-21 14:29:57.562187822 +0000 UTC m=+6.261006301" lastFinishedPulling="2025-11-21 14:30:00.505428926 +0000 UTC m=+9.204247397" observedRunningTime="2025-11-21 14:30:01.48249203 +0000 UTC m=+10.181310521" watchObservedRunningTime="2025-11-21 14:30:01.491739095 +0000 UTC m=+10.190557581"
	Nov 21 14:30:11 no-preload-921956 kubelet[2141]: I1121 14:30:11.123299    2141 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:30:11 no-preload-921956 kubelet[2141]: I1121 14:30:11.217716    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4941c273-72bf-49af-ad72-793444a43d21-config-volume\") pod \"coredns-66bc5c9577-s4rzb\" (UID: \"4941c273-72bf-49af-ad72-793444a43d21\") " pod="kube-system/coredns-66bc5c9577-s4rzb"
	Nov 21 14:30:11 no-preload-921956 kubelet[2141]: I1121 14:30:11.217767    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdnbd\" (UniqueName: \"kubernetes.io/projected/4941c273-72bf-49af-ad72-793444a43d21-kube-api-access-kdnbd\") pod \"coredns-66bc5c9577-s4rzb\" (UID: \"4941c273-72bf-49af-ad72-793444a43d21\") " pod="kube-system/coredns-66bc5c9577-s4rzb"
	Nov 21 14:30:11 no-preload-921956 kubelet[2141]: I1121 14:30:11.217792    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgngm\" (UniqueName: \"kubernetes.io/projected/75fb9c04-833c-4511-83c7-380f4848e49d-kube-api-access-xgngm\") pod \"storage-provisioner\" (UID: \"75fb9c04-833c-4511-83c7-380f4848e49d\") " pod="kube-system/storage-provisioner"
	Nov 21 14:30:11 no-preload-921956 kubelet[2141]: I1121 14:30:11.217813    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/75fb9c04-833c-4511-83c7-380f4848e49d-tmp\") pod \"storage-provisioner\" (UID: \"75fb9c04-833c-4511-83c7-380f4848e49d\") " pod="kube-system/storage-provisioner"
	Nov 21 14:30:12 no-preload-921956 kubelet[2141]: I1121 14:30:12.489077    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.489054503 podStartE2EDuration="15.489054503s" podCreationTimestamp="2025-11-21 14:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:30:12.488769927 +0000 UTC m=+21.187588414" watchObservedRunningTime="2025-11-21 14:30:12.489054503 +0000 UTC m=+21.187873004"
	Nov 21 14:30:14 no-preload-921956 kubelet[2141]: I1121 14:30:14.717866    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s4rzb" podStartSLOduration=17.717840588 podStartE2EDuration="17.717840588s" podCreationTimestamp="2025-11-21 14:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:30:12.499285739 +0000 UTC m=+21.198104225" watchObservedRunningTime="2025-11-21 14:30:14.717840588 +0000 UTC m=+23.416659075"
	Nov 21 14:30:14 no-preload-921956 kubelet[2141]: I1121 14:30:14.839225    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6t8s\" (UniqueName: \"kubernetes.io/projected/73c5bb38-ca7b-4848-93a8-0622f9c1292e-kube-api-access-z6t8s\") pod \"busybox\" (UID: \"73c5bb38-ca7b-4848-93a8-0622f9c1292e\") " pod="default/busybox"
	Nov 21 14:30:17 no-preload-921956 kubelet[2141]: I1121 14:30:17.506909    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.407150052 podStartE2EDuration="3.506888201s" podCreationTimestamp="2025-11-21 14:30:14 +0000 UTC" firstStartedPulling="2025-11-21 14:30:15.170205799 +0000 UTC m=+23.869024278" lastFinishedPulling="2025-11-21 14:30:17.269943947 +0000 UTC m=+25.968762427" observedRunningTime="2025-11-21 14:30:17.506500039 +0000 UTC m=+26.205318540" watchObservedRunningTime="2025-11-21 14:30:17.506888201 +0000 UTC m=+26.205706689"
	
	
	==> storage-provisioner [e791a48ad06a8b7b9513e1f9e2d3ca8efa6a1f6e2a87bde2ee89459cc8d4f03f] <==
	I1121 14:30:11.670154       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:30:11.682926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:30:11.682992       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:30:11.688059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:11.694400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:30:11.694824       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:30:11.695142       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9b2ba257-b216-4d68-8b76-44e8d620e754", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-921956_b4b04708-11b1-4a5e-aeb4-de08a1a4cf98 became leader
	I1121 14:30:11.695242       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-921956_b4b04708-11b1-4a5e-aeb4-de08a1a4cf98!
	W1121 14:30:11.698336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:11.702840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:30:11.796200       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-921956_b4b04708-11b1-4a5e-aeb4-de08a1a4cf98!
	W1121 14:30:13.706791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:13.710561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:15.713197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:15.717390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:17.721086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:17.726624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:19.730502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:19.736180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:21.740847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:21.747343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:23.751127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:23.755918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-921956 -n no-preload-921956
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-921956 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-921956
helpers_test.go:243: (dbg) docker inspect no-preload-921956:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643",
	        "Created": "2025-11-21T14:29:20.340927235Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253091,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:29:20.385308254Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643/hostname",
	        "HostsPath": "/var/lib/docker/containers/2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643/hosts",
	        "LogPath": "/var/lib/docker/containers/2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643/2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643-json.log",
	        "Name": "/no-preload-921956",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-921956:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-921956",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2f8cf80dc5834c3b77b4b28a85091d9922ec41b06b0eb0d5f0a2b3af5854e643",
	                "LowerDir": "/var/lib/docker/overlay2/5405febd5abf836dbb465ba59f30da4381ba6c183a6e8927bdc55a96aceaaf63-init/diff:/var/lib/docker/overlay2/a649757dd9587fa5a20ca8a56ec1923099f2a5e912dc7e8e1dfa08e79248b59f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5405febd5abf836dbb465ba59f30da4381ba6c183a6e8927bdc55a96aceaaf63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5405febd5abf836dbb465ba59f30da4381ba6c183a6e8927bdc55a96aceaaf63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5405febd5abf836dbb465ba59f30da4381ba6c183a6e8927bdc55a96aceaaf63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-921956",
	                "Source": "/var/lib/docker/volumes/no-preload-921956/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-921956",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-921956",
	                "name.minikube.sigs.k8s.io": "no-preload-921956",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "202c56918b451e57ac6a6940b6773054760fbb30c422daf31ff01b1753b6ebd3",
	            "SandboxKey": "/var/run/docker/netns/202c56918b45",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-921956": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6269051e29ec1521c06cedb27527bf727867cfc36d1dc7699629b8110ce83ce3",
	                    "EndpointID": "9d6544cccf2a2df07942c882c6a2c4ef55c6ecebe3af4be8d2e234f681a411b9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "fa:b0:e0:f4:ee:69",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-921956",
	                        "2f8cf80dc583"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-921956 -n no-preload-921956
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-921956 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-921956 logs -n 25: (1.192083762s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p cilium-459127 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ delete  │ -p cert-expiration-371956                                                                                                                                                                                                                           │ cert-expiration-371956       │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ -p cilium-459127 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ ssh     │ -p cilium-459127 sudo crio config                                                                                                                                                                                                                   │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ delete  │ -p cilium-459127                                                                                                                                                                                                                                    │ cilium-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ start   │ -p cert-options-733993 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p force-systemd-flag-730471 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p NoKubernetes-187733 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ stop    │ -p NoKubernetes-187733                                                                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p NoKubernetes-187733 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p NoKubernetes-187733 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │                     │
	│ delete  │ -p NoKubernetes-187733                                                                                                                                                                                                                              │ NoKubernetes-187733          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p old-k8s-version-012258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ ssh     │ cert-options-733993 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ ssh     │ -p cert-options-733993 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ delete  │ -p cert-options-733993                                                                                                                                                                                                                              │ cert-options-733993          │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p no-preload-921956 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ ssh     │ force-systemd-flag-730471 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ delete  │ -p force-systemd-flag-730471                                                                                                                                                                                                                        │ force-systemd-flag-730471    │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p default-k8s-diff-port-376255 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-376255 │ jenkins │ v1.37.0 │ 21 Nov 25 14:29 UTC │ 21 Nov 25 14:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-012258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:30 UTC │ 21 Nov 25 14:30 UTC │
	│ stop    │ -p old-k8s-version-012258 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:30 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-376255 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-376255 │ jenkins │ v1.37.0 │ 21 Nov 25 14:30 UTC │ 21 Nov 25 14:30 UTC │
	│ stop    │ -p default-k8s-diff-port-376255 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-376255 │ jenkins │ v1.37.0 │ 21 Nov 25 14:30 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:29:24
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:29:24.877938  255774 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:29:24.878133  255774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:29:24.878179  255774 out.go:374] Setting ErrFile to fd 2...
	I1121 14:29:24.878200  255774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:29:24.879901  255774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:29:24.881344  255774 out.go:368] Setting JSON to false
	I1121 14:29:24.883254  255774 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4307,"bootTime":1763731058,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:29:24.883372  255774 start.go:143] virtualization: kvm guest
	I1121 14:29:24.885483  255774 out.go:179] * [default-k8s-diff-port-376255] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:29:24.887201  255774 notify.go:221] Checking for updates...
	I1121 14:29:24.887242  255774 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:29:24.890729  255774 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:29:24.892963  255774 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:24.894677  255774 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 14:29:24.897870  255774 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:29:24.899765  255774 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:29:24.902854  255774 config.go:182] Loaded profile config "kubernetes-upgrade-797080": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:24.903030  255774 config.go:182] Loaded profile config "no-preload-921956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:24.903162  255774 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:24.903312  255774 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:29:24.939143  255774 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:29:24.939248  255774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:29:25.025144  255774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-21 14:29:25.01035373 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:29:25.025295  255774 docker.go:319] overlay module found
	I1121 14:29:25.027378  255774 out.go:179] * Using the docker driver based on user configuration
	I1121 14:29:22.611340  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-012258
	
	I1121 14:29:22.611365  249617 ubuntu.go:182] provisioning hostname "old-k8s-version-012258"
	I1121 14:29:22.611426  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:22.635589  249617 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:22.635869  249617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1121 14:29:22.635891  249617 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-012258 && echo "old-k8s-version-012258" | sudo tee /etc/hostname
	I1121 14:29:22.796661  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-012258
	
	I1121 14:29:22.796754  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:22.822578  249617 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:22.822834  249617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1121 14:29:22.822860  249617 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-012258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-012258/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-012258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:29:22.970644  249617 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:29:22.970676  249617 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:29:22.970732  249617 ubuntu.go:190] setting up certificates
	I1121 14:29:22.970743  249617 provision.go:84] configureAuth start
	I1121 14:29:22.970826  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:22.991118  249617 provision.go:143] copyHostCerts
	I1121 14:29:22.991183  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:29:22.991193  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:29:22.991250  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:29:22.991367  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:29:22.991381  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:29:22.991414  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:29:22.991488  249617 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:29:22.991499  249617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:29:22.991526  249617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:29:22.991627  249617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-012258 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-012258]
	I1121 14:29:23.140756  249617 provision.go:177] copyRemoteCerts
	I1121 14:29:23.140833  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:29:23.140885  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.161751  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.269718  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:29:23.292619  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1121 14:29:23.314336  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:29:23.337086  249617 provision.go:87] duration metric: took 366.309314ms to configureAuth
	I1121 14:29:23.337129  249617 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:29:23.337306  249617 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:23.337320  249617 machine.go:97] duration metric: took 3.89496072s to provisionDockerMachine
	I1121 14:29:23.337326  249617 client.go:176] duration metric: took 11.527957207s to LocalClient.Create
	I1121 14:29:23.337344  249617 start.go:167] duration metric: took 11.528071392s to libmachine.API.Create "old-k8s-version-012258"
	I1121 14:29:23.337352  249617 start.go:293] postStartSetup for "old-k8s-version-012258" (driver="docker")
	I1121 14:29:23.337365  249617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:29:23.337422  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:29:23.337471  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.359217  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.466089  249617 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:29:23.470146  249617 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:29:23.470174  249617 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:29:23.470185  249617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:29:23.470249  249617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:29:23.470349  249617 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:29:23.470480  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:29:23.479086  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:23.506776  249617 start.go:296] duration metric: took 169.402964ms for postStartSetup
	I1121 14:29:23.507166  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:23.527044  249617 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/config.json ...
	I1121 14:29:23.527374  249617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:29:23.527425  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.546669  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.645314  249617 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:29:23.650498  249617 start.go:128] duration metric: took 11.844529266s to createHost
	I1121 14:29:23.650523  249617 start.go:83] releasing machines lock for "old-k8s-version-012258", held for 11.844683904s
	I1121 14:29:23.650592  249617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-012258
	I1121 14:29:23.671161  249617 ssh_runner.go:195] Run: cat /version.json
	I1121 14:29:23.671227  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.671321  249617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:29:23.671403  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:23.694189  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.694196  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:23.856609  249617 ssh_runner.go:195] Run: systemctl --version
	I1121 14:29:23.863273  249617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:29:23.867917  249617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:29:23.867991  249617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:29:23.895679  249617 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
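(For reference, the find invocation above is logged with its shell metacharacters unquoted, as minikube passes them as separate arguments; an equivalent form typed into an ordinary shell would need the quoting shown in this sketch, which renames matching bridge/podman configs out of the way exactly as the log reports.)

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;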
	I1121 14:29:23.895707  249617 start.go:496] detecting cgroup driver to use...
	I1121 14:29:23.895742  249617 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:29:23.895805  249617 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:29:23.911897  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:29:23.925350  249617 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:29:23.925400  249617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:29:23.943424  249617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:29:23.962675  249617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:29:24.059689  249617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:29:24.169263  249617 docker.go:234] disabling docker service ...
	I1121 14:29:24.169325  249617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:29:24.191949  249617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:29:24.206181  249617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:29:24.319402  249617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:29:24.455060  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:29:24.472888  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:29:24.497138  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1121 14:29:24.524424  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:29:24.536491  249617 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:29:24.536702  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:29:24.547193  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:24.559919  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:29:24.571627  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:24.581977  249617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:29:24.629839  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:29:24.640310  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:29:24.650595  249617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:29:24.660801  249617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:29:24.669493  249617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:29:24.677810  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:24.781513  249617 ssh_runner.go:195] Run: sudo systemctl restart containerd
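(Taken together, the sed edits above are meant to leave /etc/containerd/config.toml with the systemd cgroup driver, the pause:3.9 sandbox image, the runc v2 runtime, the default CNI conf_dir, and unprivileged ports enabled. A sketch for confirming the result after the restart:)

    sudo grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir|io\.containerd\.runc\.v2' /etc/containerd/config.toml
    sudo systemctl is-active containerd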
	I1121 14:29:24.929576  249617 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:29:24.929707  249617 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:29:24.936782  249617 start.go:564] Will wait 60s for crictl version
	I1121 14:29:24.936893  249617 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.942453  249617 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:29:24.986447  249617 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:29:24.986527  249617 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:25.018021  249617 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:25.051308  249617 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1121 14:29:25.029036  255774 start.go:309] selected driver: docker
	I1121 14:29:25.029056  255774 start.go:930] validating driver "docker" against <nil>
	I1121 14:29:25.029071  255774 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:29:25.029977  255774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:29:25.123370  255774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-21 14:29:25.11156096 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:29:25.123696  255774 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:29:25.124078  255774 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:29:25.125758  255774 out.go:179] * Using Docker driver with root privileges
	I1121 14:29:25.127166  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:25.127249  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:25.127262  255774 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:29:25.127353  255774 start.go:353] cluster config:
	{Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:25.129454  255774 out.go:179] * Starting "default-k8s-diff-port-376255" primary control-plane node in "default-k8s-diff-port-376255" cluster
	I1121 14:29:25.130961  255774 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:29:25.132637  255774 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:29:25.134190  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:25.134237  255774 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1121 14:29:25.134251  255774 cache.go:65] Caching tarball of preloaded images
	I1121 14:29:25.134262  255774 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:29:25.134379  255774 preload.go:238] Found /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1121 14:29:25.134391  255774 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 14:29:25.134520  255774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json ...
	I1121 14:29:25.134560  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json: {Name:mk1db0ba6952ac549a7eae06783e73916a7ad392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.161339  255774 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:29:25.161363  255774 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:29:25.161384  255774 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:29:25.161419  255774 start.go:360] acquireMachinesLock for default-k8s-diff-port-376255: {Name:mka18b3ecaec4bae205bc7951f90400738bef300 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:29:25.161518  255774 start.go:364] duration metric: took 79.824µs to acquireMachinesLock for "default-k8s-diff-port-376255"
	I1121 14:29:25.161561  255774 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disabl
eCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:25.161653  255774 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:29:25.055066  249617 cli_runner.go:164] Run: docker network inspect old-k8s-version-012258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:25.085953  249617 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1121 14:29:25.093859  249617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
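(The bash one-liner above rewrites /etc/hosts in place so the network gateway resolves as host.minikube.internal; afterwards the file should contain a line like the one sketched below, with the gateway address taken from this log's subnet:)

    grep 'host.minikube.internal' /etc/hosts
    # 192.168.94.1	host.minikube.internal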
	I1121 14:29:25.111432  249617 kubeadm.go:884] updating cluster {Name:old-k8s-version-012258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:29:25.111671  249617 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 14:29:25.111753  249617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:25.143860  249617 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:25.143888  249617 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:29:25.143953  249617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:25.174770  249617 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:25.174789  249617 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:29:25.174797  249617 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 containerd true true} ...
	I1121 14:29:25.174897  249617 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-012258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
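(The kubelet unit drop-in shown above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp further down. A sketch for inspecting what systemd actually picked up after the daemon-reload, run on the node:)

    sudo systemctl cat kubelet
    sudo systemctl status kubelet --no-pager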
	I1121 14:29:25.174970  249617 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:25.211311  249617 cni.go:84] Creating CNI manager for ""
	I1121 14:29:25.211341  249617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:25.211371  249617 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:25.211401  249617 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-012258 NodeName:old-k8s-version-012258 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:25.211596  249617 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-012258"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:25.211673  249617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1121 14:29:25.224124  249617 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:29:25.224202  249617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:25.235430  249617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1121 14:29:25.254181  249617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:25.283842  249617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1121 14:29:25.302971  249617 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:25.309092  249617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:25.325170  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:25.438037  249617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:25.469767  249617 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258 for IP: 192.168.94.2
	I1121 14:29:25.469790  249617 certs.go:195] generating shared ca certs ...
	I1121 14:29:25.469811  249617 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.470023  249617 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:25.470095  249617 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:25.470105  249617 certs.go:257] generating profile certs ...
	I1121 14:29:25.470177  249617 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key
	I1121 14:29:25.470199  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt with IP's: []
	I1121 14:29:25.634340  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt ...
	I1121 14:29:25.634374  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt: {Name:mk5e1a3132436dad740351857d527e3c45fff4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.648586  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key ...
	I1121 14:29:25.648625  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.key: {Name:mk757010d91a13b26eb1340def496546bee9bf26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.648791  249617 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc
	I1121 14:29:25.648816  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1121 14:29:25.817862  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc ...
	I1121 14:29:25.817892  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc: {Name:mk8a482343e99af6e8bdd7e52a6e5b813685beb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.818099  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc ...
	I1121 14:29:25.818121  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc: {Name:mk4cf761e884b2a77e105e39ad6b0495b59b5aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:25.818237  249617 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt.a13049cc -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt
	I1121 14:29:25.818331  249617 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key.a13049cc -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key
	I1121 14:29:25.818390  249617 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key
	I1121 14:29:25.818406  249617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt with IP's: []
	I1121 14:29:26.390351  249617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt ...
	I1121 14:29:26.390391  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt: {Name:mk37207f300780275f6aa5331fc436d60739196c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:26.390599  249617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key ...
	I1121 14:29:26.390617  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key: {Name:mkff5d416178c38a50235608b783c3957bee8456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:26.390849  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:26.390898  249617 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:26.390913  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:26.390946  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:26.390988  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:26.391029  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:26.391086  249617 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:26.391817  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:26.418450  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:26.446063  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:26.469197  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:26.493823  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1121 14:29:26.526847  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:29:26.555176  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
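(The certificate steps above generate and copy an apiserver cert whose SANs were logged earlier as 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.94.2; a sketch for verifying them on the node, assuming openssl is available in the node image:)

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'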
	I1121 14:29:25.915600  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:25.916118  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
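(The refused healthz probe above can be reproduced by hand against the same endpoint, from the host or from inside the node; a sketch using the IP and port as logged, which fails with a connection-refused error while the apiserver is down:)

    curl -sk -m 2 https://192.168.76.2:8443/healthz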
	I1121 14:29:25.916177  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:25.916228  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:25.948057  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:25.948080  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:25.948087  213058 cri.go:89] found id: ""
	I1121 14:29:25.948096  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:25.948160  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.952634  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.956801  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:25.956870  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:25.990988  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:25.991014  213058 cri.go:89] found id: ""
	I1121 14:29:25.991024  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:25.991083  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:25.995665  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:25.995736  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:26.031577  213058 cri.go:89] found id: ""
	I1121 14:29:26.031604  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.031612  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:26.031618  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:26.031665  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:26.064880  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:26.064907  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:26.064912  213058 cri.go:89] found id: ""
	I1121 14:29:26.064922  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:26.064979  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.070274  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.075659  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:26.075731  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:26.108079  213058 cri.go:89] found id: ""
	I1121 14:29:26.108108  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.108118  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:26.108125  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:26.108181  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:26.138988  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:26.139018  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:26.139024  213058 cri.go:89] found id: ""
	I1121 14:29:26.139034  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:26.139096  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.143487  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.147564  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:26.147631  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:26.185747  213058 cri.go:89] found id: ""
	I1121 14:29:26.185774  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.185785  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:26.185793  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:26.185848  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:26.220265  213058 cri.go:89] found id: ""
	I1121 14:29:26.220296  213058 logs.go:282] 0 containers: []
	W1121 14:29:26.220308  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:26.220321  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:26.220335  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:26.265042  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:26.265072  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:26.402636  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:26.402672  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:26.484531  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:26.484565  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:26.484581  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:26.534239  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:26.534294  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:26.579971  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:26.580016  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:26.643693  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:26.643727  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:26.683712  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:26.683748  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:26.702800  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:26.702836  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:26.741813  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:26.741845  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:26.812944  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:26.812997  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:26.855307  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:26.855347  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
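(The post-mortem gathering above follows a fixed pattern: list candidate containers by name with crictl, tail each container's logs, then pull the kubelet and containerd journals. A condensed sketch of the same steps run manually, with kube-apiserver as the example name from this log:)

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo /usr/local/bin/crictl logs --tail 400 <container-id>    # substitute an ID returned above
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400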
	I1121 14:29:24.308535  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1121 14:29:24.308619  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.317176  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1121 14:29:24.317245  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.318774  252125 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1121 14:29:24.318825  252125 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.318867  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.328208  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1121 14:29:24.328249  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1121 14:29:24.328291  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.328305  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.328664  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1121 14:29:24.328708  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1121 14:29:24.335839  252125 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1121 14:29:24.335900  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.337631  252125 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1121 14:29:24.337672  252125 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.337713  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.346363  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.346443  252125 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1121 14:29:24.346484  252125 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.346517  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.361284  252125 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1121 14:29:24.361331  252125 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.361375  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.361424  252125 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1121 14:29:24.361445  252125 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.361477  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.366787  252125 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1121 14:29:24.366831  252125 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1121 14:29:24.366871  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.379457  252125 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1121 14:29:24.379503  252125 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.379558  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:24.379677  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.388569  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.388608  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.388658  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.388681  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.388574  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.418705  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.418763  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.427350  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:29:24.434639  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.434777  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.437430  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.437452  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.477986  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.478027  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:29:24.478099  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:29:24.478334  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:24.478136  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:29:24.485019  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:29:24.485026  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:29:24.489362  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:29:24.521124  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:29:24.521651  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:29:24.521767  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:24.553384  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1121 14:29:24.553425  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1121 14:29:24.553522  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:29:24.553632  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:24.553699  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1121 14:29:24.553755  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.553769  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:29:24.553803  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:29:24.553853  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:24.553860  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:24.553893  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:29:24.553920  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1121 14:29:24.553945  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:24.553945  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1121 14:29:24.565027  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1121 14:29:24.565077  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1121 14:29:24.565153  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1121 14:29:24.565169  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1121 14:29:24.574297  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1121 14:29:24.574338  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1121 14:29:24.574363  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1121 14:29:24.574390  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1121 14:29:24.574393  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1121 14:29:24.574407  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1121 14:29:24.784169  252125 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.784246  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1121 14:29:24.964305  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1121 14:29:25.029557  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:25.029626  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:29:25.445459  252125 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1121 14:29:25.445578  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:26.691152  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.661495413s)
	I1121 14:29:26.691188  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1121 14:29:26.691209  252125 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:26.691206  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.245604103s)
	I1121 14:29:26.691250  252125 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1121 14:29:26.691264  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:29:26.691297  252125 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:26.691347  252125 ssh_runner.go:195] Run: which crictl
	I1121 14:29:26.696141  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:28.100615  252125 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.404441617s)
	I1121 14:29:28.100696  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:28.100615  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.409327822s)
	I1121 14:29:28.100767  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1121 14:29:28.100803  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:28.100853  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:29:28.132780  252125 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:25.163849  255774 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:29:25.164318  255774 start.go:159] libmachine.API.Create for "default-k8s-diff-port-376255" (driver="docker")
	I1121 14:29:25.164395  255774 client.go:173] LocalClient.Create starting
	I1121 14:29:25.164513  255774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem
	I1121 14:29:25.164575  255774 main.go:143] libmachine: Decoding PEM data...
	I1121 14:29:25.164605  255774 main.go:143] libmachine: Parsing certificate...
	I1121 14:29:25.164704  255774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem
	I1121 14:29:25.164760  255774 main.go:143] libmachine: Decoding PEM data...
	I1121 14:29:25.164776  255774 main.go:143] libmachine: Parsing certificate...
	I1121 14:29:25.165330  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:29:25.188513  255774 cli_runner.go:211] docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:29:25.188614  255774 network_create.go:284] running [docker network inspect default-k8s-diff-port-376255] to gather additional debugging logs...
	I1121 14:29:25.188640  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255
	W1121 14:29:25.213297  255774 cli_runner.go:211] docker network inspect default-k8s-diff-port-376255 returned with exit code 1
	I1121 14:29:25.213338  255774 network_create.go:287] error running [docker network inspect default-k8s-diff-port-376255]: docker network inspect default-k8s-diff-port-376255: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-376255 not found
	I1121 14:29:25.213435  255774 network_create.go:289] output of [docker network inspect default-k8s-diff-port-376255]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-376255 not found
	
	** /stderr **
	I1121 14:29:25.213589  255774 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:25.240844  255774 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-66cfc06dc768 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:44:28:22:82:94} reservation:<nil>}
	I1121 14:29:25.241874  255774 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-39921db0d513 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:e4:85:98:a5:e3} reservation:<nil>}
	I1121 14:29:25.242975  255774 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-36a8741c90a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:21:99:72:63:4a} reservation:<nil>}
	I1121 14:29:25.244042  255774 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-63d543fc8bbd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:58:40:d2:33:c4} reservation:<nil>}
	I1121 14:29:25.245269  255774 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb46e0}
	I1121 14:29:25.245303  255774 network_create.go:124] attempt to create docker network default-k8s-diff-port-376255 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1121 14:29:25.245384  255774 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 default-k8s-diff-port-376255
	I1121 14:29:25.322210  255774 network_create.go:108] docker network default-k8s-diff-port-376255 192.168.85.0/24 created
	I1121 14:29:25.322244  255774 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-376255" container
	I1121 14:29:25.322309  255774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:29:25.346732  255774 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-376255 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:29:25.374919  255774 oci.go:103] Successfully created a docker volume default-k8s-diff-port-376255
	I1121 14:29:25.374994  255774 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-376255-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --entrypoint /usr/bin/test -v default-k8s-diff-port-376255:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:29:26.343288  255774 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-376255
	I1121 14:29:26.343370  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:26.343387  255774 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:29:26.343457  255774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-376255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 14:29:26.582319  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:29:26.606403  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:26.635408  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:26.661287  249617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:26.686582  249617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:26.703157  249617 ssh_runner.go:195] Run: openssl version
	I1121 14:29:26.712353  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:26.725593  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.732381  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.732523  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:26.774823  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:26.785127  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:26.796035  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.800685  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.800751  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:26.842185  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:26.852632  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:26.863838  249617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.869571  249617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.869642  249617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:26.922017  249617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:26.934065  249617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:26.939457  249617 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:26.939526  249617 kubeadm.go:401] StartCluster: {Name:old-k8s-version-012258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-012258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:26.939648  249617 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:26.939710  249617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:26.978114  249617 cri.go:89] found id: ""
	I1121 14:29:26.978192  249617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:26.989363  249617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:27.000529  249617 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:27.000603  249617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:27.012158  249617 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:27.012179  249617 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:27.012231  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:29:27.022084  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:27.022141  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:27.034139  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:29:27.044897  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:27.045038  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:27.056593  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:29:27.066532  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:27.066615  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:27.077925  249617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:29:27.088254  249617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:27.088320  249617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:27.098442  249617 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:27.205509  249617 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:27.290009  249617 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:29.388121  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:29.388594  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:29.388645  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:29.388690  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:29.416964  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:29.416991  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:29.416996  213058 cri.go:89] found id: ""
	I1121 14:29:29.417006  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:29.417074  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.421476  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.425483  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:29.425557  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:29.453687  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:29.453708  213058 cri.go:89] found id: ""
	I1121 14:29:29.453718  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:29.453783  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.458267  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:29.458353  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:29.485804  213058 cri.go:89] found id: ""
	I1121 14:29:29.485865  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.485876  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:29.485883  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:29.485940  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:29.514265  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:29.514290  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:29.514294  213058 cri.go:89] found id: ""
	I1121 14:29:29.514302  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:29.514349  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.518626  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.522446  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:29.522501  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:29.549770  213058 cri.go:89] found id: ""
	I1121 14:29:29.549799  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.549811  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:29.549819  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:29.549868  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:29.577193  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:29.577217  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:29.577222  213058 cri.go:89] found id: ""
	I1121 14:29:29.577230  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:29.577288  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.581256  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:29.585291  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:29.585347  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:29.614632  213058 cri.go:89] found id: ""
	I1121 14:29:29.614664  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.614674  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:29.614682  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:29.614740  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:29.645697  213058 cri.go:89] found id: ""
	I1121 14:29:29.645721  213058 logs.go:282] 0 containers: []
	W1121 14:29:29.645730  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:29.645741  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:29.645756  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:29.675578  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:29.675607  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:29.718952  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:29.718990  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:29.750089  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:29.750117  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:29.858708  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:29.858738  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:29.902976  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:29.903013  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:29.938083  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:29.938118  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:29.976329  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:29.976366  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:29.991448  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:29.991485  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:30.053990  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:30.054015  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:30.054032  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:30.089042  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:30.089076  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:30.124498  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:30.124528  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:32.685601  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:32.686035  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:32.686089  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:32.686144  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:32.744948  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:32.745095  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:32.745132  213058 cri.go:89] found id: ""
	I1121 14:29:32.745169  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:32.745355  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.752020  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.760837  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:32.761106  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:32.807418  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:32.807451  213058 cri.go:89] found id: ""
	I1121 14:29:32.807462  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:32.807521  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.813216  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:32.813289  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:32.852598  213058 cri.go:89] found id: ""
	I1121 14:29:32.852633  213058 logs.go:282] 0 containers: []
	W1121 14:29:32.852645  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:32.852653  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:32.852711  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:32.889120  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:32.889144  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:32.889148  213058 cri.go:89] found id: ""
	I1121 14:29:32.889157  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:32.889211  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.894834  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.900572  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:32.900646  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:32.937810  213058 cri.go:89] found id: ""
	I1121 14:29:32.937836  213058 logs.go:282] 0 containers: []
	W1121 14:29:32.937846  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:32.937853  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:32.937914  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:32.975713  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:32.975735  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:32.975741  213058 cri.go:89] found id: ""
	I1121 14:29:32.975751  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:32.975815  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.981574  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:32.985965  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:32.986030  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:33.019894  213058 cri.go:89] found id: ""
	I1121 14:29:33.019923  213058 logs.go:282] 0 containers: []
	W1121 14:29:33.019935  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:33.019949  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:33.020009  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:33.051872  213058 cri.go:89] found id: ""
	I1121 14:29:33.051901  213058 logs.go:282] 0 containers: []
	W1121 14:29:33.051911  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:33.051923  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:33.051937  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:33.103114  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:33.103153  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:33.142816  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:33.142846  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:33.209677  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:33.209736  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:33.255185  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:33.255220  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:33.272562  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:33.272600  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:33.319098  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:33.319132  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:33.366245  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:33.366286  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:33.410624  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:33.410660  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:33.458217  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:33.458253  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:33.586879  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:33.586919  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1121 14:29:29.835800  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.734910291s)
	I1121 14:29:29.835838  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1121 14:29:29.835860  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:29.835902  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:29:29.835802  252125 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.702989246s)
	I1121 14:29:29.835965  252125 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1121 14:29:29.836056  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:29.840842  252125 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1121 14:29:29.840873  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1121 14:29:32.866902  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (3.030968163s)
	I1121 14:29:32.866941  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1121 14:29:32.866961  252125 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:32.867002  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:29:31.901829  255774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-376255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.558304176s)
	I1121 14:29:31.901864  255774 kic.go:203] duration metric: took 5.558473353s to extract preloaded images to volume ...
	W1121 14:29:31.901941  255774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 14:29:31.901969  255774 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 14:29:31.902010  255774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:29:31.985847  255774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-376255 --name default-k8s-diff-port-376255 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-376255 --network default-k8s-diff-port-376255 --ip 192.168.85.2 --volume default-k8s-diff-port-376255:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:29:32.403824  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Running}}
	I1121 14:29:32.427802  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.456228  255774 cli_runner.go:164] Run: docker exec default-k8s-diff-port-376255 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:29:32.514766  255774 oci.go:144] the created container "default-k8s-diff-port-376255" has a running status.
	I1121 14:29:32.514799  255774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa...
	I1121 14:29:32.829505  255774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:29:32.861911  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.888316  255774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:29:32.888342  255774 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-376255 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:29:32.948121  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:32.975355  255774 machine.go:94] provisionDockerMachine start ...
	I1121 14:29:32.975799  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:33.002463  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:33.002813  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:33.002834  255774 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:29:33.003677  255774 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37682->127.0.0.1:33070: read: connection reset by peer
	I1121 14:29:37.228254  249617 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1121 14:29:37.228434  249617 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:37.228644  249617 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:37.228822  249617 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:37.228907  249617 kubeadm.go:319] OS: Linux
	I1121 14:29:37.228971  249617 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:37.229029  249617 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:37.229111  249617 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:37.229198  249617 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:37.229264  249617 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:37.229333  249617 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:37.229403  249617 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:37.229468  249617 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:37.229624  249617 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:37.229762  249617 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:37.229892  249617 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1121 14:29:37.230051  249617 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:29:37.235113  249617 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:37.235306  249617 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:37.235508  249617 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:37.235691  249617 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:37.235858  249617 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:37.236102  249617 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:37.236205  249617 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:37.236303  249617 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:37.236516  249617 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-012258] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:29:37.236607  249617 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:37.236765  249617 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-012258] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:29:37.236861  249617 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:37.236954  249617 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:37.237021  249617 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:37.237104  249617 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:37.237178  249617 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:37.237257  249617 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:37.237352  249617 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:37.237438  249617 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:37.237554  249617 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:37.237649  249617 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:37.239227  249617 out.go:252]   - Booting up control plane ...
	I1121 14:29:37.239369  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:37.239534  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:37.239682  249617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:37.239829  249617 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:37.239965  249617 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:37.240022  249617 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:37.240260  249617 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1121 14:29:37.240373  249617 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.503152 seconds
	I1121 14:29:37.240759  249617 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:37.240933  249617 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:37.241035  249617 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:37.241286  249617 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-012258 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:37.241409  249617 kubeadm.go:319] [bootstrap-token] Using token: yix385.n0xejrlt7sdx1ngs
	I1121 14:29:37.243198  249617 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:37.243379  249617 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:37.243497  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:37.243755  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:37.243946  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:37.244147  249617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:37.244287  249617 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:37.244477  249617 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:37.244564  249617 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:37.244632  249617 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:37.244642  249617 kubeadm.go:319] 
	I1121 14:29:37.244725  249617 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:37.244736  249617 kubeadm.go:319] 
	I1121 14:29:37.244834  249617 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:37.244845  249617 kubeadm.go:319] 
	I1121 14:29:37.244877  249617 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:37.244966  249617 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:37.245033  249617 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:37.245045  249617 kubeadm.go:319] 
	I1121 14:29:37.245111  249617 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:37.245120  249617 kubeadm.go:319] 
	I1121 14:29:37.245178  249617 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:37.245192  249617 kubeadm.go:319] 
	I1121 14:29:37.245274  249617 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:37.245371  249617 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:37.245468  249617 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:37.245476  249617 kubeadm.go:319] 
	I1121 14:29:37.245604  249617 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:37.245734  249617 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:37.245755  249617 kubeadm.go:319] 
	I1121 14:29:37.245866  249617 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yix385.n0xejrlt7sdx1ngs \
	I1121 14:29:37.246024  249617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:37.246062  249617 kubeadm.go:319] 	--control-plane 
	I1121 14:29:37.246072  249617 kubeadm.go:319] 
	I1121 14:29:37.246178  249617 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:37.246189  249617 kubeadm.go:319] 
	I1121 14:29:37.246294  249617 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yix385.n0xejrlt7sdx1ngs \
	I1121 14:29:37.246443  249617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:37.246454  249617 cni.go:84] Creating CNI manager for ""
	I1121 14:29:37.246462  249617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:37.248274  249617 out.go:179] * Configuring CNI (Container Networking Interface) ...
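
(Editor's note: the CNI decision logged above — "docker" driver plus "containerd" runtime, therefore kindnet — reduces to a small rule. Below is a minimal Go sketch of that rule for readers following the log; chooseCNI is an illustrative name, not minikube's actual function.)

package main

import "fmt"

// chooseCNI is an illustrative stand-in for the selection logged above:
// when the "docker" driver is paired with a non-docker container runtime
// (e.g. containerd), an explicit CNI is required and kindnet is recommended.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "" // no explicit CNI; the runtime's default networking is used
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // prints: kindnet
}
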
	I1121 14:29:36.147516  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376255
	
	I1121 14:29:36.147569  255774 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-376255"
	I1121 14:29:36.147633  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.169609  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:36.169898  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:36.169928  255774 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-376255 && echo "default-k8s-diff-port-376255" | sudo tee /etc/hostname
	I1121 14:29:36.328958  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376255
	
	I1121 14:29:36.329040  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.353105  255774 main.go:143] libmachine: Using SSH client type: native
	I1121 14:29:36.353414  255774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1121 14:29:36.353448  255774 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-376255' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-376255/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-376255' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:29:36.504067  255774 main.go:143] libmachine: SSH cmd err, output: <nil>: 
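
(Editor's note: the SSH command above conditionally rewrites or appends the 127.0.1.1 entry in /etc/hosts for the new hostname. A minimal Go sketch that builds the same shell snippet is shown below; hostsUpdateCmd is a hypothetical helper, not minikube's code.)

package main

import "fmt"

// hostsUpdateCmd builds a shell snippet equivalent to the one logged above:
// if no /etc/hosts line already names the host, either rewrite the existing
// 127.0.1.1 entry or append a new one.
func hostsUpdateCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsUpdateCmd("default-k8s-diff-port-376255"))
}
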
	I1121 14:29:36.504097  255774 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:29:36.504119  255774 ubuntu.go:190] setting up certificates
	I1121 14:29:36.504133  255774 provision.go:84] configureAuth start
	I1121 14:29:36.504206  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:36.528674  255774 provision.go:143] copyHostCerts
	I1121 14:29:36.528752  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:29:36.528762  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:29:36.528840  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:29:36.528968  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:29:36.528997  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:29:36.529043  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:29:36.529141  255774 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:29:36.529152  255774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:29:36.529188  255774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:29:36.529281  255774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-376255 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-376255 localhost minikube]
	I1121 14:29:36.617208  255774 provision.go:177] copyRemoteCerts
	I1121 14:29:36.617283  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:29:36.617345  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.639948  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:36.749486  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:29:36.777360  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1121 14:29:36.804875  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:29:36.830920  255774 provision.go:87] duration metric: took 326.762892ms to configureAuth
	I1121 14:29:36.830953  255774 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:29:36.831165  255774 config.go:182] Loaded profile config "default-k8s-diff-port-376255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:36.831181  255774 machine.go:97] duration metric: took 3.855604158s to provisionDockerMachine
	I1121 14:29:36.831191  255774 client.go:176] duration metric: took 11.666782197s to LocalClient.Create
	I1121 14:29:36.831216  255774 start.go:167] duration metric: took 11.666902979s to libmachine.API.Create "default-k8s-diff-port-376255"
	I1121 14:29:36.831234  255774 start.go:293] postStartSetup for "default-k8s-diff-port-376255" (driver="docker")
	I1121 14:29:36.831254  255774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:29:36.831311  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:29:36.831360  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:36.855811  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:36.969760  255774 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:29:36.974452  255774 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:29:36.974529  255774 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:29:36.974577  255774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:29:36.974658  255774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:29:36.974771  255774 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:29:36.974903  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:29:36.984975  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:37.017462  255774 start.go:296] duration metric: took 186.210262ms for postStartSetup
	I1121 14:29:37.017947  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:37.041309  255774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/config.json ...
	I1121 14:29:37.041659  255774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:29:37.041731  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.070697  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.177189  255774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:29:37.185711  255774 start.go:128] duration metric: took 12.024042461s to createHost
	I1121 14:29:37.185741  255774 start.go:83] releasing machines lock for "default-k8s-diff-port-376255", held for 12.024206528s
	I1121 14:29:37.185820  255774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-376255
	I1121 14:29:37.211853  255774 ssh_runner.go:195] Run: cat /version.json
	I1121 14:29:37.211903  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.211965  255774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:29:37.212033  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:37.238575  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.242252  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:37.421321  255774 ssh_runner.go:195] Run: systemctl --version
	I1121 14:29:37.431728  255774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:29:37.437939  255774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:29:37.438053  255774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:29:37.469409  255774 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:29:37.469437  255774 start.go:496] detecting cgroup driver to use...
	I1121 14:29:37.469471  255774 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:29:37.469521  255774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:29:37.490669  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:29:37.507754  255774 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:29:37.507821  255774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:29:37.525644  255774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:29:37.545289  255774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:29:37.674060  255774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:29:37.795128  255774 docker.go:234] disabling docker service ...
	I1121 14:29:37.795198  255774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:29:37.819043  255774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:29:37.834819  255774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:29:37.960408  255774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:29:38.072269  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:29:38.089314  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:29:38.105248  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:29:38.117445  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:29:38.128509  255774 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:29:38.128607  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:29:38.139526  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:38.150896  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:29:38.161459  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:29:38.173179  255774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:29:38.183645  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:29:38.194923  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:29:38.207896  255774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:29:38.220346  255774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:29:38.230823  255774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:29:38.241807  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:38.339708  255774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:29:38.460319  255774 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:29:38.460387  255774 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:29:38.465812  255774 start.go:564] Will wait 60s for crictl version
	I1121 14:29:38.465875  255774 ssh_runner.go:195] Run: which crictl
	I1121 14:29:38.470166  255774 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:29:38.507773  255774 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:29:38.507860  255774 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:38.532247  255774 ssh_runner.go:195] Run: containerd --version
	I1121 14:29:38.559098  255774 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
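
(Editor's note: the block above points crictl at the containerd socket, flips containerd to the systemd cgroup driver, sets the pause image, and restarts the service. The Go sketch below replays the same shell edits locally via os/exec under the assumption of root access; it is illustrative only, not minikube's ssh_runner flow.)

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes one shell command and aborts on failure (illustrative helper).
func run(cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		log.Fatalf("%s: %v\n%s", cmd, err, out)
	}
}

func main() {
	// Point crictl at the containerd socket, mirroring the tee step in the log.
	run(`printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml`)

	// Switch runc to the systemd cgroup driver and set the pause image,
	// mirroring the sed edits logged above.
	run(`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml`)
	run(`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml`)

	// Reload units and restart containerd so the new config takes effect.
	run(`sudo systemctl daemon-reload && sudo systemctl restart containerd`)
	fmt.Println("containerd reconfigured")
}
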
	W1121 14:29:33.655577  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:33.655599  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:33.655612  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.225853  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:36.226247  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:36.226304  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:36.226364  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:36.259583  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:36.259613  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:36.259619  213058 cri.go:89] found id: ""
	I1121 14:29:36.259628  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:36.259690  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.264798  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.269597  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:36.269663  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:36.304312  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:36.304335  213058 cri.go:89] found id: ""
	I1121 14:29:36.304346  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:36.304403  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.309760  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:36.309833  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:36.342617  213058 cri.go:89] found id: ""
	I1121 14:29:36.342643  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.342653  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:36.342660  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:36.342722  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:36.378880  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.378909  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:36.378914  213058 cri.go:89] found id: ""
	I1121 14:29:36.378924  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:36.378996  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.384032  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.388866  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:36.388932  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:36.427253  213058 cri.go:89] found id: ""
	I1121 14:29:36.427282  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.427293  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:36.427300  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:36.427355  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:36.461581  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:36.461604  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:36.461609  213058 cri.go:89] found id: ""
	I1121 14:29:36.461618  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:36.461677  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.466623  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:36.471422  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:36.471490  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:36.503502  213058 cri.go:89] found id: ""
	I1121 14:29:36.503533  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.503566  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:36.503575  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:36.503633  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:36.538350  213058 cri.go:89] found id: ""
	I1121 14:29:36.538379  213058 logs.go:282] 0 containers: []
	W1121 14:29:36.538390  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:36.538404  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:36.538419  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:36.666987  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:36.667025  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:36.685628  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:36.685659  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:36.763464  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:36.763491  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:36.763508  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:36.808789  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:36.808832  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:36.887558  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:36.887596  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:36.952391  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:36.952434  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:36.993139  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:36.993167  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:37.037499  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:37.037552  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:37.084237  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:37.084270  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:37.132236  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:37.132272  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:37.172720  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:37.172753  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
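
(Editor's note: the post-failure log gathering above follows one pattern per component: list matching containers with `crictl ps -a --quiet --name=<component>`, then tail each container's logs. A compact Go sketch of that pattern follows; it assumes crictl is on the PATH and is not the test harness's actual code.)

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches,
// the same way the gathering step above does.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		log.Fatal(err)
	}
	return strings.Fields(string(out))
}

// tailLogs fetches the last n log lines of one container.
func tailLogs(id string, n int) string {
	out, _ := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out)
}

func main() {
	// Walk the same control-plane components the test gathers logs for.
	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		for _, id := range containerIDs(name) {
			fmt.Printf("==> %s [%s]\n%s\n", name, id, tailLogs(id, 400))
		}
	}
}
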
	I1121 14:29:34.341753  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.474720913s)
	I1121 14:29:34.341781  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1121 14:29:34.341812  252125 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:34.341855  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:29:37.308520  252125 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.966633628s)
	I1121 14:29:37.308585  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1121 14:29:37.308616  252125 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:37.308666  252125 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:29:37.772300  252125 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1121 14:29:37.772349  252125 cache_images.go:125] Successfully loaded all cached images
	I1121 14:29:37.772358  252125 cache_images.go:94] duration metric: took 13.627858156s to LoadCachedImages
	I1121 14:29:37.772375  252125 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1121 14:29:37.772522  252125 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-921956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-921956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:37.772622  252125 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:37.802988  252125 cni.go:84] Creating CNI manager for ""
	I1121 14:29:37.803017  252125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:37.803041  252125 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:37.803067  252125 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-921956 NodeName:no-preload-921956 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:37.803212  252125 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-921956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:37.803298  252125 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:37.814189  252125 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1121 14:29:37.814255  252125 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:37.824124  252125 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1121 14:29:37.824214  252125 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1121 14:29:37.824231  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1121 14:29:37.824217  252125 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1121 14:29:37.829417  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1121 14:29:37.829466  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1121 14:29:38.860713  252125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:29:38.875498  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1121 14:29:38.880447  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1121 14:29:38.880477  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1121 14:29:39.014274  252125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1121 14:29:39.021151  252125 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1121 14:29:39.021187  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1121 14:29:39.234010  252125 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:39.244382  252125 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1121 14:29:39.259897  252125 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:39.279143  252125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
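
(Editor's note: the sequence above checks each kubelet/kubectl/kubeadm binary with `stat` and, when the exit status is non-zero, transfers the cached binary. A minimal Go sketch of that existence-check-then-transfer flow is below; runSSH and the local stand-in are hypothetical, and the real flow copies from the minikube cache over SSH rather than printing.)

package main

import (
	"fmt"
	"os/exec"
)

// remoteFileExists mimics the check in the log: stat the path through the
// runner and treat a non-zero exit as "missing".
func remoteFileExists(runSSH func(string) error, path string) bool {
	return runSSH(fmt.Sprintf(`stat -c "%%s %%y" %s`, path)) == nil
}

func main() {
	// Illustrative local stand-in for an SSH runner.
	runSSH := func(cmd string) error {
		return exec.Command("/bin/bash", "-c", cmd).Run()
	}

	base := "/var/lib/minikube/binaries/v1.34.1"
	for _, bin := range []string{"kubectl", "kubelet", "kubeadm"} {
		path := base + "/" + bin
		if remoteFileExists(runSSH, path) {
			fmt.Println("found, skipping transfer:", path)
			continue
		}
		// At this point the real flow scp's the cached binary into place.
		fmt.Println("missing, would transfer:", path)
	}
}
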
	I1121 14:29:38.560688  255774 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-376255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:29:38.580956  255774 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:29:38.585728  255774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:38.599140  255774 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:29:38.599295  255774 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:29:38.599391  255774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:38.631637  255774 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:38.631660  255774 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:29:38.631720  255774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:29:38.665498  255774 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:29:38.665522  255774 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:29:38.665530  255774 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1121 14:29:38.665659  255774 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-376255 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:29:38.665752  255774 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:29:38.694106  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:38.694138  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:38.694156  255774 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:29:38.694182  255774 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-376255 NodeName:default-k8s-diff-port-376255 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:29:38.694318  255774 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-376255"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:29:38.694377  255774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:29:38.704016  255774 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:29:38.704074  255774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:29:38.712471  255774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1121 14:29:38.726311  255774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:29:38.743589  255774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1121 14:29:38.759275  255774 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:38.763723  255774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:38.775814  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:38.870850  255774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:38.898876  255774 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255 for IP: 192.168.85.2
	I1121 14:29:38.898898  255774 certs.go:195] generating shared ca certs ...
	I1121 14:29:38.898917  255774 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:38.899068  255774 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:38.899116  255774 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:38.899130  255774 certs.go:257] generating profile certs ...
	I1121 14:29:38.899196  255774 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key
	I1121 14:29:38.899223  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt with IP's: []
	I1121 14:29:39.101636  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt ...
	I1121 14:29:39.101669  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt: {Name:mk48f410a390b01d5b10a9357a2648374ae8306b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.101873  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key ...
	I1121 14:29:39.101885  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.key: {Name:mkb89c45215e08640f5b5fa9a6de6863ea0983e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.102008  255774 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066
	I1121 14:29:39.102024  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:29:39.438352  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 ...
	I1121 14:29:39.438387  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066: {Name:mkc5f7dc938a9541dec0c2accd850515b39a25d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.438574  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066 ...
	I1121 14:29:39.438586  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066: {Name:mka67f2d91e35acd02a0ed4174188db6877ef796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.438666  255774 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt.3377c066 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt
	I1121 14:29:39.438744  255774 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key.3377c066 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key
	I1121 14:29:39.438811  255774 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key
	I1121 14:29:39.438826  255774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt with IP's: []
	I1121 14:29:39.523793  255774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt ...
	I1121 14:29:39.523827  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt: {Name:mk2418751bb08ae4f2cae2628ba430b2e731f823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.524011  255774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key ...
	I1121 14:29:39.524031  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key: {Name:mk12031f310020bd38886fd870544563c6ab1faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
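
(Editor's note: the cert steps above generate profile certificates whose SANs include the service IP, loopback, and the node IP. The Go sketch below produces a certificate carrying the same IP SANs using crypto/x509; it is self-signed for brevity, whereas minikube signs these profile certs with its minikubeCA key.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same IP SANs as the apiserver cert generated in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
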
	I1121 14:29:39.524255  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:39.524307  255774 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:39.524323  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:39.524353  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:39.524383  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:39.524407  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:39.524445  255774 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:39.525071  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:39.546065  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:39.565880  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:39.585450  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:39.604394  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1121 14:29:39.623736  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:29:39.642460  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:39.661463  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:29:39.681314  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:39.879137  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:39.899730  255774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:39.918630  255774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:39.935942  255774 ssh_runner.go:195] Run: openssl version
	I1121 14:29:39.943062  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:40.020861  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.026152  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.026209  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.067681  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:40.077051  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:40.087944  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.092369  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.092434  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.132125  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:40.142255  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:40.152828  255774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.157171  255774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.157265  255774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.199881  255774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:40.210053  255774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:40.214456  255774 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:40.214524  255774 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-376255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-376255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:40.214625  255774 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:40.214692  255774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:40.249359  255774 cri.go:89] found id: ""
	I1121 14:29:40.249429  255774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:40.259121  255774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:40.270847  255774 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:40.270910  255774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:40.283266  255774 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:40.283287  255774 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:40.283341  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1121 14:29:40.293676  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:40.293725  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:40.303277  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1121 14:29:40.313015  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:40.313073  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:40.322086  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.330920  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:40.331015  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.339376  255774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1121 14:29:40.347984  255774 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:40.348046  255774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:40.356683  255774 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:40.404354  255774 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:29:40.404455  255774 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:40.435448  255774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:40.435583  255774 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:40.435628  255774 kubeadm.go:319] OS: Linux
	I1121 14:29:40.435689  255774 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:40.435827  255774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:40.435905  255774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:40.436039  255774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:40.436108  255774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:40.436176  255774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:40.436276  255774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:40.436351  255774 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:40.508224  255774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:40.508370  255774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:40.508531  255774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:29:40.513996  255774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
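
The openssl/ln sequence above is minikube installing each CA into the node's OpenSSL trust store: the certificate is copied under /usr/share/ca-certificates, its subject hash is computed, and a <hash>.0 symlink is placed in /etc/ssl/certs so TLS clients can resolve it. A condensed sketch of the equivalent shell steps for the minikubeCA case shown in the log (the hash is whatever openssl prints, b5213941 here); this is an illustration of the logged commands, not minikube's source:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # subject hash, e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
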
	I1121 14:29:39.295828  252125 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:29:39.301164  252125 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:29:39.312709  252125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:39.400897  252125 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:39.429294  252125 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956 for IP: 192.168.103.2
	I1121 14:29:39.429315  252125 certs.go:195] generating shared ca certs ...
	I1121 14:29:39.429332  252125 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.429485  252125 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:29:39.429583  252125 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:29:39.429600  252125 certs.go:257] generating profile certs ...
	I1121 14:29:39.429678  252125 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key
	I1121 14:29:39.429693  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt with IP's: []
	I1121 14:29:39.556088  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt ...
	I1121 14:29:39.556115  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: {Name:mkc697edce2d4ccb5a4a2ccbe74255aef4a205c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.556297  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key ...
	I1121 14:29:39.556312  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.key: {Name:mkad7b167b883af61314c3f8b6c71358edc782dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.556419  252125 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d
	I1121 14:29:39.556435  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1121 14:29:39.871499  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d ...
	I1121 14:29:39.871529  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d: {Name:mkc839b1c936af809ed1159ef4599336fd260d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.871726  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d ...
	I1121 14:29:39.871748  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d: {Name:mkc2f0abcac84f6547f3e0edb165e90b14fdd7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:39.871882  252125 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt.a2c9a71d -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt
	I1121 14:29:39.871997  252125 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key.a2c9a71d -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key
	I1121 14:29:39.872096  252125 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key
	I1121 14:29:39.872120  252125 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt with IP's: []
	I1121 14:29:40.083173  252125 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt ...
	I1121 14:29:40.083201  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt: {Name:mkba7efd029f616230e0b3cf14c4f32abac0549e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:40.083385  252125 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key ...
	I1121 14:29:40.083414  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key: {Name:mk24f6fbb57f5dfce4a401be193e0a832a6ccf6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:40.083661  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:29:40.083700  252125 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:29:40.083711  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:29:40.083749  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:29:40.083780  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:29:40.083827  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:29:40.083887  252125 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:29:40.084653  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:29:40.106430  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:29:40.126520  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:29:40.148412  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:29:40.169973  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:29:40.191493  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:29:40.214458  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:29:40.234692  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:29:40.261986  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:29:40.352437  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:29:40.372804  252125 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:29:40.394700  252125 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:29:40.411183  252125 ssh_runner.go:195] Run: openssl version
	I1121 14:29:40.419607  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:29:40.431060  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.436371  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.436429  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:29:40.481320  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:29:40.492797  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:29:40.502878  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.507432  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.507499  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:29:40.567779  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:29:40.577673  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:29:40.587826  252125 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.592472  252125 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.592528  252125 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:29:40.627626  252125 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:29:40.637464  252125 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:29:40.641884  252125 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:29:40.641943  252125 kubeadm.go:401] StartCluster: {Name:no-preload-921956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-921956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:29:40.642030  252125 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:29:40.642085  252125 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:29:40.673351  252125 cri.go:89] found id: ""
	I1121 14:29:40.673423  252125 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:29:40.682715  252125 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:29:40.691493  252125 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:29:40.691581  252125 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:29:40.700143  252125 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:29:40.700160  252125 kubeadm.go:158] found existing configuration files:
	
	I1121 14:29:40.700205  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:29:40.708734  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:29:40.708799  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:29:40.717135  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:29:40.726191  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:29:40.726262  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:29:40.734074  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.742647  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:29:40.742709  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:29:40.751091  252125 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:29:40.759770  252125 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:29:40.759841  252125 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:29:40.768253  252125 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:29:40.810825  252125 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:29:40.810892  252125 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:29:40.831836  252125 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:29:40.831940  252125 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:29:40.832026  252125 kubeadm.go:319] OS: Linux
	I1121 14:29:40.832115  252125 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:29:40.832212  252125 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:29:40.832286  252125 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:29:40.832358  252125 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:29:40.832432  252125 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:29:40.832504  252125 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:29:40.832668  252125 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:29:40.832735  252125 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:29:40.895341  252125 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:29:40.895491  252125 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:29:40.895637  252125 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:29:40.901358  252125 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
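
The grep/rm pairs above are minikube's stale-kubeconfig check before kubeadm init: each file kubeadm manages is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm regenerates it. On this first start none of the files exist, so every grep exits with status 2 and the rm calls are no-ops. Roughly, for the no-preload-921956 profile (port 8443); a sketch of the logic visible in the log, not the literal minikube code:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # missing or stale: let kubeadm init rewrite it
      fi
    done
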
	I1121 14:29:37.249631  249617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:37.262987  249617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1121 14:29:37.263020  249617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:37.283444  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:38.138719  249617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:38.138808  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:38.138810  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-012258 minikube.k8s.io/updated_at=2025_11_21T14_29_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=old-k8s-version-012258 minikube.k8s.io/primary=true
	I1121 14:29:38.150782  249617 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:38.225220  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:38.726231  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:39.225533  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:39.725591  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.225601  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:40.725734  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:41.226112  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
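
The repeated "kubectl get sa default" runs above are process 249617 (the old-k8s-version-012258 profile) polling roughly every 500ms until the control plane has created the default ServiceAccount, its signal that the cluster is ready for the RBAC binding and node labels issued at 14:29:38. A sketch of that wait loop, under the assumption it simply retries until the command succeeds:

    # poll until the "default" ServiceAccount exists (illustrative loop, not minikube's code)
    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig; do
      sleep 0.5
    done
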
	I1121 14:29:40.521190  255774 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:40.521325  255774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:40.521431  255774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:41.003970  255774 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:41.240665  255774 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:41.425685  255774 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:41.689428  255774 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:41.923373  255774 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:41.923563  255774 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-376255 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:29:42.051973  255774 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:42.052979  255774 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-376255 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:29:42.277531  255774 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:42.491572  255774 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:42.605458  255774 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:42.605535  255774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:42.870659  255774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:43.039072  255774 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:29:43.228611  255774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:43.489903  255774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:43.563271  255774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:43.563948  255774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:43.568453  255774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:39.727688  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:39.728083  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:39.728134  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:39.728197  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:39.758413  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:39.758436  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:39.758441  213058 cri.go:89] found id: ""
	I1121 14:29:39.758452  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:39.758508  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.763439  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.767912  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:39.767980  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:39.802923  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:39.802948  213058 cri.go:89] found id: ""
	I1121 14:29:39.802957  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:39.803013  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.807778  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:39.807853  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:39.835286  213058 cri.go:89] found id: ""
	I1121 14:29:39.835314  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.835335  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:39.835343  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:39.835408  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:39.864986  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:39.865034  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:39.865040  213058 cri.go:89] found id: ""
	I1121 14:29:39.865050  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:39.865105  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.869441  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.873676  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:39.873739  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:39.902671  213058 cri.go:89] found id: ""
	I1121 14:29:39.902698  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.902707  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:39.902715  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:39.902762  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:39.933452  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:39.933477  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:39.933483  213058 cri.go:89] found id: ""
	I1121 14:29:39.933492  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:39.933557  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.938051  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:39.942029  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:39.942094  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:39.969991  213058 cri.go:89] found id: ""
	I1121 14:29:39.970018  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.970028  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:39.970036  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:39.970086  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:39.997381  213058 cri.go:89] found id: ""
	I1121 14:29:39.997406  213058 logs.go:282] 0 containers: []
	W1121 14:29:39.997417  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:39.997429  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:39.997443  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:40.027188  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:40.027213  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:40.067878  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:40.067906  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:40.101358  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:40.101388  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:40.115674  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:40.115704  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:40.153845  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:40.153871  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:40.188913  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:40.188944  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:40.244995  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:40.245033  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:40.351506  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:40.351558  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:40.417221  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:40.417244  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:40.417263  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:40.457789  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:40.457836  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:40.520712  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:40.520748  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.056648  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:43.057094  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:43.057150  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:43.057204  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:43.085236  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:43.085260  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:43.085265  213058 cri.go:89] found id: ""
	I1121 14:29:43.085275  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:43.085333  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.089868  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.094074  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:43.094134  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:43.122420  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:43.122447  213058 cri.go:89] found id: ""
	I1121 14:29:43.122457  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:43.122512  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.126830  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:43.126892  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:43.156518  213058 cri.go:89] found id: ""
	I1121 14:29:43.156566  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.156577  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:43.156584  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:43.156646  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:43.185212  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:43.185233  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.185238  213058 cri.go:89] found id: ""
	I1121 14:29:43.185277  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:43.185338  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.190000  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.194074  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:43.194131  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:43.224175  213058 cri.go:89] found id: ""
	I1121 14:29:43.224201  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.224211  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:43.224218  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:43.224277  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:43.258260  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:43.258292  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:43.258299  213058 cri.go:89] found id: ""
	I1121 14:29:43.258310  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:43.258378  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.263276  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:43.268195  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:43.268264  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:43.303269  213058 cri.go:89] found id: ""
	I1121 14:29:43.303300  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.303311  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:43.303319  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:43.303379  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:43.333956  213058 cri.go:89] found id: ""
	I1121 14:29:43.333985  213058 logs.go:282] 0 containers: []
	W1121 14:29:43.333995  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:43.334007  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:43.334021  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:43.366338  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:43.366369  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:43.458987  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:43.459027  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:43.497960  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:43.497995  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:43.539997  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:43.540035  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:43.575882  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:43.575911  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
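
The "Gathering logs for ..." block above is the post-mortem collector for process 213058: for each control-plane component it lists container IDs with crictl, tails the last 400 lines of each, and adds kubelet/containerd journal output and filtered dmesg. A condensed sketch of the commands it runs, taken from the log above (kube-apiserver shown as the example component):

    for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
      sudo crictl logs --tail 400 "$id"
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
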
	I1121 14:29:40.903405  252125 out.go:252]   - Generating certificates and keys ...
	I1121 14:29:40.903502  252125 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:29:40.903630  252125 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:29:41.180390  252125 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:29:41.211121  252125 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:29:41.523007  252125 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:29:42.461521  252125 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:29:42.641495  252125 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:29:42.641701  252125 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-921956] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:29:42.773640  252125 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:29:42.773843  252125 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-921956] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:29:42.921369  252125 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:29:43.256203  252125 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:29:43.834470  252125 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:29:43.834645  252125 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:29:43.949422  252125 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:29:44.093777  252125 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:29:44.227287  252125 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:29:44.509482  252125 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:29:44.696294  252125 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:29:44.696767  252125 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:29:44.705846  252125 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:29:43.573374  255774 out.go:252]   - Booting up control plane ...
	I1121 14:29:43.573510  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:43.573669  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:43.573781  255774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:43.590344  255774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:43.590494  255774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:29:43.599838  255774 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:29:43.600184  255774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:43.600247  255774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:43.720721  255774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:29:43.720878  255774 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:29:44.721899  255774 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001196965s
	I1121 14:29:44.724830  255774 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:29:44.724972  255774 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1121 14:29:44.725131  255774 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:29:44.725253  255774 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:29:41.726266  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:42.225460  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:42.725727  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.225740  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.725669  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:44.225350  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:44.725651  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:45.226025  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:45.725289  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:46.226316  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:43.632243  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:43.632278  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:43.681909  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:43.681959  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:43.723402  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:43.723454  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:43.776606  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:43.776641  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:43.793171  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:43.793200  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:29:43.854264  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:29:43.854293  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:43.854308  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:46.383659  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:29:46.384075  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:29:46.384128  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:29:46.384191  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:29:46.441629  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:46.441734  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:46.441754  213058 cri.go:89] found id: ""
	I1121 14:29:46.441776  213058 logs.go:282] 2 containers: [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:29:46.441873  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.447714  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.453337  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:29:46.453422  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:29:46.497451  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:46.497475  213058 cri.go:89] found id: ""
	I1121 14:29:46.497485  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:29:46.497585  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.504731  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:29:46.504801  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:29:46.562972  213058 cri.go:89] found id: ""
	I1121 14:29:46.563014  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.563027  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:29:46.563036  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:29:46.563287  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:29:46.611186  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:46.611216  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:46.611221  213058 cri.go:89] found id: ""
	I1121 14:29:46.611231  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:29:46.611289  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.620404  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.626388  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:29:46.626559  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:29:46.674192  213058 cri.go:89] found id: ""
	I1121 14:29:46.674247  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.674259  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:29:46.674267  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:29:46.674448  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:29:46.749738  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:46.749765  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:46.749771  213058 cri.go:89] found id: ""
	I1121 14:29:46.749780  213058 logs.go:282] 2 containers: [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:29:46.749835  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.756273  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:29:46.763986  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:29:46.764120  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:29:46.811858  213058 cri.go:89] found id: ""
	I1121 14:29:46.811883  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.811901  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:29:46.811909  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:29:46.811963  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:29:46.849599  213058 cri.go:89] found id: ""
	I1121 14:29:46.849645  213058 logs.go:282] 0 containers: []
	W1121 14:29:46.849655  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:29:46.849666  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:29:46.849683  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:29:46.913988  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:29:46.914024  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:29:46.953189  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:29:46.953227  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:29:47.001663  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:29:47.001705  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:29:47.041106  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:29:47.041137  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:29:47.107673  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:29:47.107712  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:29:47.240432  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:29:47.240473  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:29:47.288852  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:29:47.288894  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1121 14:29:46.531314  255774 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.80645272s
	I1121 14:29:47.509316  255774 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.784421033s
	I1121 14:29:49.226647  255774 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501794549s
	I1121 14:29:49.239409  255774 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:49.252719  255774 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:49.264076  255774 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:49.264371  255774 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-376255 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:49.274799  255774 kubeadm.go:319] [bootstrap-token] Using token: 8nwcfl.9utqukqcvuro6a4p
	I1121 14:29:44.769338  252125 out.go:252]   - Booting up control plane ...
	I1121 14:29:44.769476  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:29:44.769652  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:29:44.769771  252125 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:29:44.769940  252125 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:29:44.770087  252125 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:29:44.778391  252125 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:29:44.779655  252125 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:29:44.779729  252125 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:29:44.894196  252125 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:29:44.894364  252125 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:29:45.895053  252125 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000974959s
	I1121 14:29:45.898754  252125 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:29:45.898875  252125 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1121 14:29:45.899003  252125 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:29:45.899149  252125 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:29:48.621169  252125 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.722350043s
	I1121 14:29:49.059709  252125 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.160801257s
	I1121 14:29:49.276414  255774 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:49.276590  255774 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:49.280532  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:49.287374  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:49.290401  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:49.293308  255774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:49.297552  255774 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:49.632747  255774 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:46.726037  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:47.228665  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:47.725338  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:48.226199  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:48.725959  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:49.225812  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:49.725337  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.225293  249617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.310282  249617 kubeadm.go:1114] duration metric: took 12.17154172s to wait for elevateKubeSystemPrivileges
	I1121 14:29:50.310322  249617 kubeadm.go:403] duration metric: took 23.370802852s to StartCluster
	I1121 14:29:50.310347  249617 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:50.310438  249617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:50.311864  249617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:50.312167  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:50.312169  249617 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:50.312267  249617 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:50.312352  249617 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-012258"
	I1121 14:29:50.312372  249617 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-012258"
	I1121 14:29:50.312403  249617 host.go:66] Checking if "old-k8s-version-012258" exists ...
	I1121 14:29:50.312458  249617 config.go:182] Loaded profile config "old-k8s-version-012258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:29:50.312516  249617 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-012258"
	I1121 14:29:50.312530  249617 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-012258"
	I1121 14:29:50.312827  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.312965  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.314603  249617 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:50.316238  249617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:50.339724  249617 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:50.056893  255774 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:50.634602  255774 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:50.635720  255774 kubeadm.go:319] 
	I1121 14:29:50.635840  255774 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:50.635916  255774 kubeadm.go:319] 
	I1121 14:29:50.636085  255774 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:50.636139  255774 kubeadm.go:319] 
	I1121 14:29:50.636189  255774 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:50.636300  255774 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:50.636386  255774 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:50.636448  255774 kubeadm.go:319] 
	I1121 14:29:50.636574  255774 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:50.636584  255774 kubeadm.go:319] 
	I1121 14:29:50.636647  255774 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:50.636652  255774 kubeadm.go:319] 
	I1121 14:29:50.636709  255774 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:50.636796  255774 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:50.636878  255774 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:50.636886  255774 kubeadm.go:319] 
	I1121 14:29:50.636981  255774 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:50.637083  255774 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:50.637090  255774 kubeadm.go:319] 
	I1121 14:29:50.637247  255774 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 8nwcfl.9utqukqcvuro6a4p \
	I1121 14:29:50.637414  255774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:50.637449  255774 kubeadm.go:319] 	--control-plane 
	I1121 14:29:50.637460  255774 kubeadm.go:319] 
	I1121 14:29:50.637571  255774 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:50.637580  255774 kubeadm.go:319] 
	I1121 14:29:50.637672  255774 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 8nwcfl.9utqukqcvuro6a4p \
	I1121 14:29:50.637785  255774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:50.642202  255774 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:50.642513  255774 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:50.642647  255774 cni.go:84] Creating CNI manager for ""
	I1121 14:29:50.642693  255774 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:50.645524  255774 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:50.339929  249617 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-012258"
	I1121 14:29:50.339977  249617 host.go:66] Checking if "old-k8s-version-012258" exists ...
	I1121 14:29:50.340433  249617 cli_runner.go:164] Run: docker container inspect old-k8s-version-012258 --format={{.State.Status}}
	I1121 14:29:50.341133  249617 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:50.341154  249617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:50.341208  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:50.377822  249617 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:50.377846  249617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:50.377844  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:50.377907  249617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-012258
	I1121 14:29:50.410483  249617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/old-k8s-version-012258/id_rsa Username:docker}
	I1121 14:29:50.415901  249617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:50.468678  249617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:50.503643  249617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:50.536480  249617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:50.667362  249617 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:50.668484  249617 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-012258" to be "Ready" ...
	I1121 14:29:50.954598  249617 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:29:50.401999  252125 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502477764s
	I1121 14:29:50.419850  252125 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:29:50.933016  252125 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:29:50.948821  252125 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:29:50.949093  252125 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-921956 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:29:50.961417  252125 kubeadm.go:319] [bootstrap-token] Using token: uhuim0.7wh8hbt7v76eo7qs
	I1121 14:29:50.955828  249617 addons.go:530] duration metric: took 643.55365ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:51.174831  249617 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-012258" context rescaled to 1 replicas
	I1121 14:29:50.963415  252125 out.go:252]   - Configuring RBAC rules ...
	I1121 14:29:50.963588  252125 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:29:50.971176  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:29:50.980644  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:29:50.985255  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:29:50.989946  252125 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:29:50.994015  252125 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:29:51.128309  252125 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:29:51.550178  252125 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:29:52.128624  252125 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:29:52.129402  252125 kubeadm.go:319] 
	I1121 14:29:52.129496  252125 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:29:52.129528  252125 kubeadm.go:319] 
	I1121 14:29:52.129657  252125 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:29:52.129669  252125 kubeadm.go:319] 
	I1121 14:29:52.129705  252125 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:29:52.129798  252125 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:29:52.129906  252125 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:29:52.129923  252125 kubeadm.go:319] 
	I1121 14:29:52.129995  252125 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:29:52.130004  252125 kubeadm.go:319] 
	I1121 14:29:52.130078  252125 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:29:52.130087  252125 kubeadm.go:319] 
	I1121 14:29:52.130170  252125 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:29:52.130304  252125 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:29:52.130418  252125 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:29:52.130446  252125 kubeadm.go:319] 
	I1121 14:29:52.130574  252125 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:29:52.130677  252125 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:29:52.130685  252125 kubeadm.go:319] 
	I1121 14:29:52.130797  252125 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uhuim0.7wh8hbt7v76eo7qs \
	I1121 14:29:52.130966  252125 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb \
	I1121 14:29:52.131000  252125 kubeadm.go:319] 	--control-plane 
	I1121 14:29:52.131035  252125 kubeadm.go:319] 
	I1121 14:29:52.131212  252125 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:29:52.131230  252125 kubeadm.go:319] 
	I1121 14:29:52.131343  252125 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uhuim0.7wh8hbt7v76eo7qs \
	I1121 14:29:52.131485  252125 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2aad198f79b3258593291a08f0028a72548d0fc82d6b54639b7d7d17a52adfdb 
	I1121 14:29:52.132830  252125 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:29:52.132967  252125 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:29:52.133003  252125 cni.go:84] Creating CNI manager for ""
	I1121 14:29:52.133014  252125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:29:52.134968  252125 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:29:52.136241  252125 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:52.141107  252125 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:29:52.141131  252125 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:52.155585  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:52.395340  252125 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:52.395422  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.395526  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-921956 minikube.k8s.io/updated_at=2025_11_21T14_29_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=no-preload-921956 minikube.k8s.io/primary=true
	I1121 14:29:52.481012  252125 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:52.481125  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.982198  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.481748  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.981282  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.646815  255774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:29:50.654615  255774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:29:50.654642  255774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:29:50.673887  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:29:50.944978  255774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:29:50.945143  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:50.945309  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-376255 minikube.k8s.io/updated_at=2025_11_21T14_29_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=default-k8s-diff-port-376255 minikube.k8s.io/primary=true
	I1121 14:29:50.960009  255774 ops.go:34] apiserver oom_adj: -16
	I1121 14:29:51.036596  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:51.537134  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.037345  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:52.536941  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.037592  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:53.536966  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.036678  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.536697  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.037499  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.536808  255774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.610391  255774 kubeadm.go:1114] duration metric: took 4.665295307s to wait for elevateKubeSystemPrivileges
	I1121 14:29:55.610426  255774 kubeadm.go:403] duration metric: took 15.395907943s to StartCluster
	I1121 14:29:55.610448  255774 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:55.610511  255774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:55.612071  255774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:55.612346  255774 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:55.612498  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:55.612612  255774 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:55.612696  255774 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-376255"
	I1121 14:29:55.612713  255774 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-376255"
	I1121 14:29:55.612745  255774 host.go:66] Checking if "default-k8s-diff-port-376255" exists ...
	I1121 14:29:55.612775  255774 config.go:182] Loaded profile config "default-k8s-diff-port-376255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:55.612835  255774 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-376255"
	I1121 14:29:55.612852  255774 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-376255"
	I1121 14:29:55.613218  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.613392  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.613476  255774 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:55.615420  255774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:55.641842  255774 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-376255"
	I1121 14:29:55.641893  255774 host.go:66] Checking if "default-k8s-diff-port-376255" exists ...
	I1121 14:29:55.642317  255774 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-376255 --format={{.State.Status}}
	I1121 14:29:55.647007  255774 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:55.648771  255774 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:55.648807  255774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:55.648882  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:55.679690  255774 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:55.679713  255774 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:55.679780  255774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-376255
	I1121 14:29:55.680868  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:55.703091  255774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/default-k8s-diff-port-376255/id_rsa Username:docker}
	I1121 14:29:55.713751  255774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:55.781953  255774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:55.795189  255774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:55.811872  255774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:55.895061  255774 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:55.896386  255774 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-376255" to be "Ready" ...
	I1121 14:29:56.162438  255774 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1121 14:29:52.672645  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:29:55.172665  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:29:54.481750  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:54.981303  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.481778  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:55.981846  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:56.481336  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:56.981822  252125 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:29:57.056720  252125 kubeadm.go:1114] duration metric: took 4.66135199s to wait for elevateKubeSystemPrivileges
	I1121 14:29:57.056760  252125 kubeadm.go:403] duration metric: took 16.414821557s to StartCluster
	I1121 14:29:57.056783  252125 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:57.056866  252125 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:29:57.059279  252125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:29:57.059591  252125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:29:57.059595  252125 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:29:57.059668  252125 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:29:57.059755  252125 addons.go:70] Setting storage-provisioner=true in profile "no-preload-921956"
	I1121 14:29:57.059780  252125 addons.go:239] Setting addon storage-provisioner=true in "no-preload-921956"
	I1121 14:29:57.059783  252125 addons.go:70] Setting default-storageclass=true in profile "no-preload-921956"
	I1121 14:29:57.059810  252125 host.go:66] Checking if "no-preload-921956" exists ...
	I1121 14:29:57.059818  252125 config.go:182] Loaded profile config "no-preload-921956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:29:57.059810  252125 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-921956"
	I1121 14:29:57.060267  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.060366  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.061615  252125 out.go:179] * Verifying Kubernetes components...
	I1121 14:29:57.063049  252125 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:29:57.087511  252125 addons.go:239] Setting addon default-storageclass=true in "no-preload-921956"
	I1121 14:29:57.087574  252125 host.go:66] Checking if "no-preload-921956" exists ...
	I1121 14:29:57.088046  252125 cli_runner.go:164] Run: docker container inspect no-preload-921956 --format={{.State.Status}}
	I1121 14:29:57.088842  252125 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:29:57.090553  252125 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:57.090577  252125 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:29:57.090634  252125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-921956
	I1121 14:29:57.113518  252125 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:57.113567  252125 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:29:57.113644  252125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-921956
	I1121 14:29:57.116604  252125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/no-preload-921956/id_rsa Username:docker}
	I1121 14:29:57.140626  252125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/no-preload-921956/id_rsa Username:docker}
	I1121 14:29:57.162241  252125 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:29:57.221336  252125 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:29:57.237060  252125 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:29:57.259845  252125 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:29:57.393470  252125 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1121 14:29:57.394577  252125 node_ready.go:35] waiting up to 6m0s for node "no-preload-921956" to be "Ready" ...
	I1121 14:29:57.623024  252125 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:29:57.414885  213058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.125971322s)
	W1121 14:29:57.414929  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1121 14:29:57.414939  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:29:57.414952  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:29:57.462838  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:29:57.462881  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:29:57.526637  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:29:57.526671  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:29:57.574224  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:29:57.574259  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:29:57.624430  252125 addons.go:530] duration metric: took 564.759261ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:57.898009  252125 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-921956" context rescaled to 1 replicas
	I1121 14:29:56.163632  255774 addons.go:530] duration metric: took 551.031985ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:29:56.399602  255774 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-376255" context rescaled to 1 replicas
	W1121 14:29:57.899680  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:29:57.174208  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:29:59.672116  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:30:00.114035  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1121 14:29:59.398191  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:01.898360  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:29:59.900344  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:01.900816  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:04.400331  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	W1121 14:30:01.672252  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	W1121 14:30:04.171805  249617 node_ready.go:57] node "old-k8s-version-012258" has "Ready":"False" status (will retry)
	I1121 14:30:05.672011  249617 node_ready.go:49] node "old-k8s-version-012258" is "Ready"
	I1121 14:30:05.672046  249617 node_ready.go:38] duration metric: took 15.003519412s for node "old-k8s-version-012258" to be "Ready" ...
	I1121 14:30:05.672064  249617 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:05.672125  249617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:05.689799  249617 api_server.go:72] duration metric: took 15.377593574s to wait for apiserver process to appear ...
	I1121 14:30:05.689974  249617 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:05.690001  249617 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:30:05.696217  249617 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1121 14:30:05.697950  249617 api_server.go:141] control plane version: v1.28.0
	I1121 14:30:05.697978  249617 api_server.go:131] duration metric: took 7.994891ms to wait for apiserver health ...
	I1121 14:30:05.697990  249617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:05.702726  249617 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:05.702769  249617 system_pods.go:61] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.702778  249617 system_pods.go:61] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.702785  249617 system_pods.go:61] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.702796  249617 system_pods.go:61] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.702808  249617 system_pods.go:61] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.702818  249617 system_pods.go:61] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.702822  249617 system_pods.go:61] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.702829  249617 system_pods.go:61] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.702837  249617 system_pods.go:74] duration metric: took 4.84094ms to wait for pod list to return data ...
	I1121 14:30:05.702852  249617 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:05.705127  249617 default_sa.go:45] found service account: "default"
	I1121 14:30:05.705151  249617 default_sa.go:55] duration metric: took 2.290103ms for default service account to be created ...
	I1121 14:30:05.705161  249617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:05.710235  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:05.710318  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.710330  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.710337  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.710367  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.710374  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.710380  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.710385  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.710404  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.710597  249617 retry.go:31] will retry after 257.065607ms: missing components: kube-dns
	I1121 14:30:05.972608  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:05.972648  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:05.972657  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:05.972665  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:05.972676  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:05.972682  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:05.972687  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:05.972692  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:05.972707  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:05.972726  249617 retry.go:31] will retry after 339.692313ms: missing components: kube-dns
	I1121 14:30:06.317124  249617 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:06.317155  249617 system_pods.go:89] "coredns-5dd5756b68-vst4c" [3ca4df79-d875-498c-91b8-059d4f975bd0] Running
	I1121 14:30:06.317160  249617 system_pods.go:89] "etcd-old-k8s-version-012258" [2316d2c5-5731-4804-b900-b3ed4289f3d5] Running
	I1121 14:30:06.317163  249617 system_pods.go:89] "kindnet-f6t7s" [bd28a6b5-0214-42be-8883-1adf1217761c] Running
	I1121 14:30:06.317167  249617 system_pods.go:89] "kube-apiserver-old-k8s-version-012258" [fb018e50-0892-4250-9f7d-16731a31f2e5] Running
	I1121 14:30:06.317171  249617 system_pods.go:89] "kube-controller-manager-old-k8s-version-012258" [7e21a806-9ed1-4e34-a635-f92287ab6545] Running
	I1121 14:30:06.317175  249617 system_pods.go:89] "kube-proxy-wsp2w" [bc079c02-40ff-4f10-947b-76f1e9784572] Running
	I1121 14:30:06.317178  249617 system_pods.go:89] "kube-scheduler-old-k8s-version-012258" [925c4663-2ad7-41a1-9606-3fbfe8e0904d] Running
	I1121 14:30:06.317181  249617 system_pods.go:89] "storage-provisioner" [4195d236-52f6-4bfd-b47a-9cd7cd89bedd] Running
	I1121 14:30:06.317188  249617 system_pods.go:126] duration metric: took 612.020803ms to wait for k8s-apps to be running ...
	I1121 14:30:06.317194  249617 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:06.317250  249617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:06.332295  249617 system_svc.go:56] duration metric: took 15.088564ms WaitForService to wait for kubelet
	I1121 14:30:06.332331  249617 kubeadm.go:587] duration metric: took 16.020134285s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:06.332357  249617 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:06.338044  249617 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:06.338071  249617 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:06.338084  249617 node_conditions.go:105] duration metric: took 5.72136ms to run NodePressure ...
	I1121 14:30:06.338096  249617 start.go:242] waiting for startup goroutines ...
	I1121 14:30:06.338102  249617 start.go:247] waiting for cluster config update ...
	I1121 14:30:06.338113  249617 start.go:256] writing updated cluster config ...
	I1121 14:30:06.338382  249617 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:06.342534  249617 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:06.347323  249617 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vst4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.352062  249617 pod_ready.go:94] pod "coredns-5dd5756b68-vst4c" is "Ready"
	I1121 14:30:06.352087  249617 pod_ready.go:86] duration metric: took 4.697932ms for pod "coredns-5dd5756b68-vst4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.354946  249617 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.359326  249617 pod_ready.go:94] pod "etcd-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.359355  249617 pod_ready.go:86] duration metric: took 4.388182ms for pod "etcd-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.362007  249617 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.366060  249617 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.366081  249617 pod_ready.go:86] duration metric: took 4.051984ms for pod "kube-apiserver-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.368789  249617 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.746914  249617 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-012258" is "Ready"
	I1121 14:30:06.746952  249617 pod_ready.go:86] duration metric: took 378.141903ms for pod "kube-controller-manager-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:06.947790  249617 pod_ready.go:83] waiting for pod "kube-proxy-wsp2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.347266  249617 pod_ready.go:94] pod "kube-proxy-wsp2w" is "Ready"
	I1121 14:30:07.347291  249617 pod_ready.go:86] duration metric: took 399.477159ms for pod "kube-proxy-wsp2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.547233  249617 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.946728  249617 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-012258" is "Ready"
	I1121 14:30:07.946756  249617 pod_ready.go:86] duration metric: took 399.500525ms for pod "kube-scheduler-old-k8s-version-012258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:07.946772  249617 pod_ready.go:40] duration metric: took 1.604187461s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:08.009909  249617 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1121 14:30:08.014607  249617 out.go:203] 
	W1121 14:30:08.016075  249617 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1121 14:30:08.020782  249617 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1121 14:30:08.022622  249617 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-012258" cluster and "default" namespace by default
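The warning above comes from start.go comparing the local kubectl client (1.34.2) against the cluster's control plane (1.28.0) and reporting a minor-version skew of 6. A minimal sketch of that skew arithmetic, not minikube's actual implementation, assuming plain "MAJOR.MINOR.PATCH" strings like those in the log:

// Minimal sketch (not minikube's code): compute the "minor skew" reported above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components
// of two versions such as "1.34.2" and "1.28.0".
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, err := minorSkew("1.34.2", "1.28.0")
	if err != nil {
		panic(err)
	}
	// kubectl is only supported within +/-1 minor version of the apiserver,
	// which is why a skew of 6 triggers the warning in the run above.
	fmt.Printf("minor skew: %d\n", skew)
}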
	I1121 14:30:05.115052  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1121 14:30:05.115115  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:05.115188  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:05.143819  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:05.143839  213058 cri.go:89] found id: "9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:30:05.143843  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:05.143846  213058 cri.go:89] found id: ""
	I1121 14:30:05.143853  213058 logs.go:282] 3 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:05.143912  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.148585  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.152984  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.156944  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:05.157004  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:05.185404  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:05.185430  213058 cri.go:89] found id: ""
	I1121 14:30:05.185440  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:05.185498  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.190360  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:05.190432  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:05.222964  213058 cri.go:89] found id: ""
	I1121 14:30:05.222989  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.222999  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:05.223006  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:05.223058  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:05.254414  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:05.254436  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:05.254440  213058 cri.go:89] found id: ""
	I1121 14:30:05.254447  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:05.254505  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.258766  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.262456  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:05.262524  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:05.288454  213058 cri.go:89] found id: ""
	I1121 14:30:05.288486  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.288496  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:05.288505  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:05.288598  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:05.317814  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:05.317841  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:05.317847  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:05.317851  213058 cri.go:89] found id: ""
	I1121 14:30:05.317861  213058 logs.go:282] 3 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:05.317930  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.322506  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.326684  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:05.330828  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:05.330957  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:05.360073  213058 cri.go:89] found id: ""
	I1121 14:30:05.360098  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.360107  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:05.360116  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:05.360171  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:05.388524  213058 cri.go:89] found id: ""
	I1121 14:30:05.388561  213058 logs.go:282] 0 containers: []
	W1121 14:30:05.388573  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:05.388587  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:05.388602  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:05.427247  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:05.427279  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:05.517583  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:05.517615  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:05.556205  213058 logs.go:123] Gathering logs for kube-apiserver [9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1] ...
	I1121 14:30:05.556238  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a2b62669bb541c95ccc48a3bee10da7faccb77514f7c516ac47db9503f234b1"
	I1121 14:30:05.601637  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:05.601692  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:05.642125  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:05.642167  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:05.707252  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:05.707295  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:05.747947  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:05.747990  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:05.767646  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:05.767678  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:04.398534  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:06.897181  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:08.897492  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	W1121 14:30:06.900285  255774 node_ready.go:57] node "default-k8s-diff-port-376255" has "Ready":"False" status (will retry)
	I1121 14:30:07.400113  255774 node_ready.go:49] node "default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:07.400148  255774 node_ready.go:38] duration metric: took 11.503726167s for node "default-k8s-diff-port-376255" to be "Ready" ...
	I1121 14:30:07.400166  255774 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:07.400227  255774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:07.416428  255774 api_server.go:72] duration metric: took 11.804040955s to wait for apiserver process to appear ...
	I1121 14:30:07.416462  255774 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:07.416487  255774 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1121 14:30:07.423355  255774 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1121 14:30:07.424441  255774 api_server.go:141] control plane version: v1.34.1
	I1121 14:30:07.424471  255774 api_server.go:131] duration metric: took 8.001103ms to wait for apiserver health ...
	I1121 14:30:07.424480  255774 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:07.428816  255774 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:07.428856  255774 system_pods.go:61] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.428866  255774 system_pods.go:61] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.428874  255774 system_pods.go:61] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.428880  255774 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.428886  255774 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.428891  255774 system_pods.go:61] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.428899  255774 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.428912  255774 system_pods.go:61] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.428921  255774 system_pods.go:74] duration metric: took 4.433771ms to wait for pod list to return data ...
	I1121 14:30:07.428932  255774 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:07.431771  255774 default_sa.go:45] found service account: "default"
	I1121 14:30:07.431794  255774 default_sa.go:55] duration metric: took 2.856811ms for default service account to be created ...
	I1121 14:30:07.431804  255774 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:07.435787  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.435816  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.435821  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.435826  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.435830  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.435833  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.435836  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.435841  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.435846  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.435871  255774 retry.go:31] will retry after 217.060579ms: missing components: kube-dns
	I1121 14:30:07.656900  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.656930  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.656937  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.656945  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.656950  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.656955  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.656959  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.656964  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.656970  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.656989  255774 retry.go:31] will retry after 330.648304ms: missing components: kube-dns
	I1121 14:30:07.995514  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:07.995612  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:07.995626  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:07.995636  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:07.995642  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:07.995653  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:07.995659  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:07.995664  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:07.995683  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:07.995713  255774 retry.go:31] will retry after 466.383408ms: missing components: kube-dns
	I1121 14:30:08.466385  255774 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:08.466414  255774 system_pods.go:89] "coredns-66bc5c9577-fr27b" [aecd7b98-657f-464e-9860-d060714bbc5d] Running
	I1121 14:30:08.466419  255774 system_pods.go:89] "etcd-default-k8s-diff-port-376255" [b46a8392-a768-4a1b-9a89-b0c3c349dc99] Running
	I1121 14:30:08.466423  255774 system_pods.go:89] "kindnet-cdzd4" [f954f962-f79a-49e5-8b79-5fbd3c544ffc] Running
	I1121 14:30:08.466427  255774 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376255" [727fff75-5ef1-4665-a510-82662517dd6f] Running
	I1121 14:30:08.466430  255774 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376255" [d642ccae-4e43-4b4b-9d8d-51515a1aae9c] Running
	I1121 14:30:08.466435  255774 system_pods.go:89] "kube-proxy-hdplf" [f4b8f54c-361f-4748-9f31-92ffb753f404] Running
	I1121 14:30:08.466438  255774 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376255" [72272f12-0226-4c07-9867-6cedf46539a4] Running
	I1121 14:30:08.466441  255774 system_pods.go:89] "storage-provisioner" [4fa1d228-0310-45d2-87b6-91ce085f1f58] Running
	I1121 14:30:08.466448  255774 system_pods.go:126] duration metric: took 1.034639333s to wait for k8s-apps to be running ...
	I1121 14:30:08.466454  255774 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:08.466495  255774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:08.480058  255774 system_svc.go:56] duration metric: took 13.59071ms WaitForService to wait for kubelet
	I1121 14:30:08.480087  255774 kubeadm.go:587] duration metric: took 12.867708638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:08.480104  255774 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:08.483054  255774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:08.483077  255774 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:08.483089  255774 node_conditions.go:105] duration metric: took 2.980591ms to run NodePressure ...
	I1121 14:30:08.483101  255774 start.go:242] waiting for startup goroutines ...
	I1121 14:30:08.483107  255774 start.go:247] waiting for cluster config update ...
	I1121 14:30:08.483116  255774 start.go:256] writing updated cluster config ...
	I1121 14:30:08.483378  255774 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:08.487457  255774 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:08.490869  255774 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.495613  255774 pod_ready.go:94] pod "coredns-66bc5c9577-fr27b" is "Ready"
	I1121 14:30:08.495638  255774 pod_ready.go:86] duration metric: took 4.745112ms for pod "coredns-66bc5c9577-fr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.498070  255774 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.502098  255774 pod_ready.go:94] pod "etcd-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.502122  255774 pod_ready.go:86] duration metric: took 4.029361ms for pod "etcd-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.504276  255774 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.508229  255774 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.508250  255774 pod_ready.go:86] duration metric: took 3.957821ms for pod "kube-apiserver-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.510387  255774 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:08.891344  255774 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:08.891369  255774 pod_ready.go:86] duration metric: took 380.959206ms for pod "kube-controller-manager-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.091636  255774 pod_ready.go:83] waiting for pod "kube-proxy-hdplf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.492078  255774 pod_ready.go:94] pod "kube-proxy-hdplf" is "Ready"
	I1121 14:30:09.492108  255774 pod_ready.go:86] duration metric: took 400.444722ms for pod "kube-proxy-hdplf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:09.693278  255774 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:10.092105  255774 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-376255" is "Ready"
	I1121 14:30:10.092133  255774 pod_ready.go:86] duration metric: took 398.824976ms for pod "kube-scheduler-default-k8s-diff-port-376255" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:10.092146  255774 pod_ready.go:40] duration metric: took 1.604655578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:10.138628  255774 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:30:10.140593  255774 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-376255" cluster and "default" namespace by default
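The api_server.go lines above poll the apiserver's /healthz endpoint (here https://192.168.85.2:8444/healthz) until it returns HTTP 200 "ok". A minimal sketch of that polling loop, under the assumptions that the URL and timeouts are illustrative and that the serving certificate is not verified (a real client would load the cluster CA from the kubeconfig):

// Minimal sketch: poll /healthz until it returns 200 or the deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second, // per-request timeout
		Transport: &http.Transport{
			// Illustrative only: skip cert verification for the self-signed apiserver cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// Address taken from the log above; adjust for your own cluster.
	if err := waitForHealthz("https://192.168.85.2:8444/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}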
	I1121 14:30:08.754284  213058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (2.986586875s)
	W1121 14:30:08.754342  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60538->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60538->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1121 14:30:08.754352  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:08.754366  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:08.789119  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:08.789149  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:08.842933  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:08.842974  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:08.880878  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:08.880919  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:08.910920  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:30:08.910953  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.440020  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:11.440496  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:11.440556  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:11.440601  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:11.472645  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:11.472669  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:11.472674  213058 cri.go:89] found id: ""
	I1121 14:30:11.472683  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:11.472748  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.478061  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.482946  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:11.483034  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:11.517693  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:11.517722  213058 cri.go:89] found id: ""
	I1121 14:30:11.517732  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:11.517797  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.523621  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:11.523699  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:11.559155  213058 cri.go:89] found id: ""
	I1121 14:30:11.559194  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.559204  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:11.559212  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:11.559271  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:11.595093  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:11.595127  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:11.595133  213058 cri.go:89] found id: ""
	I1121 14:30:11.595143  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:11.595194  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.600085  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.604973  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:11.605048  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:11.639606  213058 cri.go:89] found id: ""
	I1121 14:30:11.639636  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.639647  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:11.639653  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:11.639713  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:11.684373  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:11.684400  213058 cri.go:89] found id: "94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.684405  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:11.684410  213058 cri.go:89] found id: ""
	I1121 14:30:11.684421  213058 logs.go:282] 3 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:11.684482  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.689732  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.695253  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:11.701315  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:11.701388  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:11.732802  213058 cri.go:89] found id: ""
	I1121 14:30:11.732831  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.732841  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:11.732848  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:11.732907  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:11.761686  213058 cri.go:89] found id: ""
	I1121 14:30:11.761717  213058 logs.go:282] 0 containers: []
	W1121 14:30:11.761729  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:11.761741  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:11.761756  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:11.816634  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:11.816670  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:11.846024  213058 logs.go:123] Gathering logs for kube-controller-manager [94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3] ...
	I1121 14:30:11.846055  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 94ee5c394341614224319acbb63aafbedcfdbe7f50d3f046a56ab246dc32ceb3"
	I1121 14:30:11.876932  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:11.876964  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:11.912984  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:11.913018  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:11.965381  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:11.965423  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:11.997477  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:11.997509  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:12.011497  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:12.011524  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:12.071024  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:12.071049  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:12.071065  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:12.106865  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:12.106898  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:12.141245  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:12.141276  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:12.176551  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:12.176600  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:12.268742  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:12.268780  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	W1121 14:30:10.897620  252125 node_ready.go:57] node "no-preload-921956" has "Ready":"False" status (will retry)
	I1121 14:30:11.398100  252125 node_ready.go:49] node "no-preload-921956" is "Ready"
	I1121 14:30:11.398128  252125 node_ready.go:38] duration metric: took 14.003530083s for node "no-preload-921956" to be "Ready" ...
	I1121 14:30:11.398142  252125 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:30:11.398195  252125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:30:11.412043  252125 api_server.go:72] duration metric: took 14.35241025s to wait for apiserver process to appear ...
	I1121 14:30:11.412070  252125 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:30:11.412087  252125 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1121 14:30:11.417254  252125 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1121 14:30:11.418517  252125 api_server.go:141] control plane version: v1.34.1
	I1121 14:30:11.418570  252125 api_server.go:131] duration metric: took 6.492303ms to wait for apiserver health ...
	I1121 14:30:11.418581  252125 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:30:11.421927  252125 system_pods.go:59] 8 kube-system pods found
	I1121 14:30:11.422024  252125 system_pods.go:61] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.422034  252125 system_pods.go:61] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.422047  252125 system_pods.go:61] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.422059  252125 system_pods.go:61] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.422069  252125 system_pods.go:61] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.422073  252125 system_pods.go:61] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.422077  252125 system_pods.go:61] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.422082  252125 system_pods.go:61] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.422094  252125 system_pods.go:74] duration metric: took 3.505153ms to wait for pod list to return data ...
	I1121 14:30:11.422109  252125 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:30:11.424685  252125 default_sa.go:45] found service account: "default"
	I1121 14:30:11.424710  252125 default_sa.go:55] duration metric: took 2.591611ms for default service account to be created ...
	I1121 14:30:11.424722  252125 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:30:11.427627  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.427680  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.427689  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.427703  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.427713  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.427721  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.427726  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.427731  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.427737  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.427768  252125 retry.go:31] will retry after 234.428318ms: missing components: kube-dns
	I1121 14:30:11.669788  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.669831  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.669840  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.669850  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.669858  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.669865  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.669871  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.669877  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.669893  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.669919  252125 retry.go:31] will retry after 250.085803ms: missing components: kube-dns
	I1121 14:30:11.924517  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:11.924602  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:11.924614  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:11.924627  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:11.924633  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:11.924642  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:11.924647  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:11.924653  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:11.924661  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:11.924682  252125 retry.go:31] will retry after 441.862758ms: missing components: kube-dns
	I1121 14:30:12.371065  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:12.371110  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:30:12.371122  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:12.371131  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:12.371136  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:12.371142  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:12.371147  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:12.371158  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:12.371170  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:30:12.371189  252125 retry.go:31] will retry after 502.578888ms: missing components: kube-dns
	I1121 14:30:12.879209  252125 system_pods.go:86] 8 kube-system pods found
	I1121 14:30:12.879243  252125 system_pods.go:89] "coredns-66bc5c9577-s4rzb" [4941c273-72bf-49af-ad72-793444a43d21] Running
	I1121 14:30:12.879249  252125 system_pods.go:89] "etcd-no-preload-921956" [2b973978-8ff2-488f-b54b-80bb44d4f320] Running
	I1121 14:30:12.879253  252125 system_pods.go:89] "kindnet-kf24h" [c698f297-3ff4-4f90-a871-5c4c944b9e61] Running
	I1121 14:30:12.879258  252125 system_pods.go:89] "kube-apiserver-no-preload-921956" [11865678-b4f0-4cb1-9f82-9c59edf0d6e6] Running
	I1121 14:30:12.879268  252125 system_pods.go:89] "kube-controller-manager-no-preload-921956" [5740abab-80b7-4352-8d44-40c9ad7fc713] Running
	I1121 14:30:12.879271  252125 system_pods.go:89] "kube-proxy-wmx7z" [7d5a84f9-144c-4920-a08d-478587a56498] Running
	I1121 14:30:12.879275  252125 system_pods.go:89] "kube-scheduler-no-preload-921956" [a200f6cd-f579-45e7-9f94-080ca622a30b] Running
	I1121 14:30:12.879278  252125 system_pods.go:89] "storage-provisioner" [75fb9c04-833c-4511-83c7-380f4848e49d] Running
	I1121 14:30:12.879289  252125 system_pods.go:126] duration metric: took 1.454561179s to wait for k8s-apps to be running ...
	I1121 14:30:12.879301  252125 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:30:12.879351  252125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:30:12.894061  252125 system_svc.go:56] duration metric: took 14.74714ms WaitForService to wait for kubelet
	I1121 14:30:12.894092  252125 kubeadm.go:587] duration metric: took 15.834465857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:30:12.894115  252125 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:30:12.897599  252125 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:30:12.897630  252125 node_conditions.go:123] node cpu capacity is 8
	I1121 14:30:12.897641  252125 node_conditions.go:105] duration metric: took 3.520753ms to run NodePressure ...
	I1121 14:30:12.897652  252125 start.go:242] waiting for startup goroutines ...
	I1121 14:30:12.897659  252125 start.go:247] waiting for cluster config update ...
	I1121 14:30:12.897669  252125 start.go:256] writing updated cluster config ...
	I1121 14:30:12.897983  252125 ssh_runner.go:195] Run: rm -f paused
	I1121 14:30:12.902897  252125 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:12.906562  252125 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s4rzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.912263  252125 pod_ready.go:94] pod "coredns-66bc5c9577-s4rzb" is "Ready"
	I1121 14:30:12.912286  252125 pod_ready.go:86] duration metric: took 5.702456ms for pod "coredns-66bc5c9577-s4rzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.915190  252125 pod_ready.go:83] waiting for pod "etcd-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.919870  252125 pod_ready.go:94] pod "etcd-no-preload-921956" is "Ready"
	I1121 14:30:12.919896  252125 pod_ready.go:86] duration metric: took 4.68423ms for pod "etcd-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.921926  252125 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.925984  252125 pod_ready.go:94] pod "kube-apiserver-no-preload-921956" is "Ready"
	I1121 14:30:12.926012  252125 pod_ready.go:86] duration metric: took 4.065762ms for pod "kube-apiserver-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:12.928283  252125 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.307608  252125 pod_ready.go:94] pod "kube-controller-manager-no-preload-921956" is "Ready"
	I1121 14:30:13.307639  252125 pod_ready.go:86] duration metric: took 379.335151ms for pod "kube-controller-manager-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.508229  252125 pod_ready.go:83] waiting for pod "kube-proxy-wmx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:13.907070  252125 pod_ready.go:94] pod "kube-proxy-wmx7z" is "Ready"
	I1121 14:30:13.907101  252125 pod_ready.go:86] duration metric: took 398.843128ms for pod "kube-proxy-wmx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.108040  252125 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.507264  252125 pod_ready.go:94] pod "kube-scheduler-no-preload-921956" is "Ready"
	I1121 14:30:14.507293  252125 pod_ready.go:86] duration metric: took 399.219492ms for pod "kube-scheduler-no-preload-921956" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:30:14.507307  252125 pod_ready.go:40] duration metric: took 1.604362709s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:30:14.554506  252125 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:30:14.556366  252125 out.go:179] * Done! kubectl is now configured to use "no-preload-921956" cluster and "default" namespace by default
	I1121 14:30:14.802507  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:14.803048  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:14.803100  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:14.803156  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:14.832438  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:14.832464  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:14.832469  213058 cri.go:89] found id: ""
	I1121 14:30:14.832479  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:14.832560  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.836869  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.840970  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:14.841027  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:14.869276  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:14.869297  213058 cri.go:89] found id: ""
	I1121 14:30:14.869306  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:14.869364  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.873530  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:14.873616  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:14.902293  213058 cri.go:89] found id: ""
	I1121 14:30:14.902325  213058 logs.go:282] 0 containers: []
	W1121 14:30:14.902336  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:14.902343  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:14.902396  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:14.931422  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:14.931444  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:14.931448  213058 cri.go:89] found id: ""
	I1121 14:30:14.931455  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:14.931507  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.936188  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:14.940673  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:14.940742  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:14.969277  213058 cri.go:89] found id: ""
	I1121 14:30:14.969308  213058 logs.go:282] 0 containers: []
	W1121 14:30:14.969320  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:14.969328  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:14.969386  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:14.999162  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:14.999190  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:14.999195  213058 cri.go:89] found id: ""
	I1121 14:30:14.999209  213058 logs.go:282] 2 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:14.999275  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:15.003627  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:15.008044  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:15.008149  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:15.036025  213058 cri.go:89] found id: ""
	I1121 14:30:15.036050  213058 logs.go:282] 0 containers: []
	W1121 14:30:15.036061  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:15.036069  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:15.036123  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:15.064814  213058 cri.go:89] found id: ""
	I1121 14:30:15.064840  213058 logs.go:282] 0 containers: []
	W1121 14:30:15.064851  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:15.064863  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:15.064877  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:15.105369  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:15.105412  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:15.145479  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:15.145521  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:15.186460  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:15.186498  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:15.233156  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:15.233196  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:15.328776  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:15.328824  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:15.343510  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:15.343556  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:15.375919  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:15.375959  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:15.412267  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:15.412310  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:15.467388  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:15.467422  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:15.495400  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:15.495451  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:15.527880  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:15.527906  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:15.589380  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:18.090626  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:18.091055  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:18.091106  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:18.091154  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:18.119750  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:18.119777  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:18.119781  213058 cri.go:89] found id: ""
	I1121 14:30:18.119788  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:18.119846  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.124441  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.128481  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:18.128574  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:18.155968  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:18.155990  213058 cri.go:89] found id: ""
	I1121 14:30:18.156000  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:18.156056  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.160457  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:18.160529  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:18.191869  213058 cri.go:89] found id: ""
	I1121 14:30:18.191899  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.191909  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:18.191916  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:18.191990  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:18.222614  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:18.222639  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:18.222644  213058 cri.go:89] found id: ""
	I1121 14:30:18.222653  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:18.222710  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.227248  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.231976  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:18.232054  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:18.261651  213058 cri.go:89] found id: ""
	I1121 14:30:18.261686  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.261696  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:18.261703  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:18.261756  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:18.293248  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:18.293277  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:18.293283  213058 cri.go:89] found id: ""
	I1121 14:30:18.293291  213058 logs.go:282] 2 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:18.293360  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.297988  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:18.302375  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:18.302444  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:18.331900  213058 cri.go:89] found id: ""
	I1121 14:30:18.331976  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.331989  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:18.331997  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:18.332053  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:18.362314  213058 cri.go:89] found id: ""
	I1121 14:30:18.362341  213058 logs.go:282] 0 containers: []
	W1121 14:30:18.362351  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:18.362363  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:18.362378  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:18.401362  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:18.401403  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:18.453554  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:18.453597  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:18.470719  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:18.470750  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:18.535220  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:18.535241  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:18.535255  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:18.572460  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:18.572490  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:18.609997  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:18.610036  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:18.671215  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:18.671245  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:18.700326  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:18.700361  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:18.738576  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:18.738616  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:30:18.771440  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:18.771468  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:18.876806  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:18.876849  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:21.415623  213058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:30:21.416017  213058 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:30:21.416063  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:30:21.416112  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:30:21.454140  213058 cri.go:89] found id: "56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:21.454163  213058 cri.go:89] found id: "934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:21.454167  213058 cri.go:89] found id: ""
	I1121 14:30:21.454175  213058 logs.go:282] 2 containers: [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780]
	I1121 14:30:21.454224  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:21.460360  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:21.465894  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1121 14:30:21.465986  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:30:21.506215  213058 cri.go:89] found id: "4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:21.506251  213058 cri.go:89] found id: ""
	I1121 14:30:21.506262  213058 logs.go:282] 1 containers: [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359]
	I1121 14:30:21.506324  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:21.512116  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1121 14:30:21.512202  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:30:21.545610  213058 cri.go:89] found id: ""
	I1121 14:30:21.545640  213058 logs.go:282] 0 containers: []
	W1121 14:30:21.545651  213058 logs.go:284] No container was found matching "coredns"
	I1121 14:30:21.545659  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:30:21.545710  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:30:21.583984  213058 cri.go:89] found id: "e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:21.584009  213058 cri.go:89] found id: "f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:21.584016  213058 cri.go:89] found id: ""
	I1121 14:30:21.584027  213058 logs.go:282] 2 containers: [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545]
	I1121 14:30:21.584080  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:21.589162  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:21.593338  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:30:21.593399  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:30:21.628310  213058 cri.go:89] found id: ""
	I1121 14:30:21.628337  213058 logs.go:282] 0 containers: []
	W1121 14:30:21.628348  213058 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:30:21.628356  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:30:21.628531  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:30:21.668940  213058 cri.go:89] found id: "652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:21.668966  213058 cri.go:89] found id: "56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:21.668972  213058 cri.go:89] found id: ""
	I1121 14:30:21.668980  213058 logs.go:282] 2 containers: [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463]
	I1121 14:30:21.669040  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:21.674451  213058 ssh_runner.go:195] Run: which crictl
	I1121 14:30:21.679616  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1121 14:30:21.679674  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:30:21.712415  213058 cri.go:89] found id: ""
	I1121 14:30:21.712442  213058 logs.go:282] 0 containers: []
	W1121 14:30:21.712453  213058 logs.go:284] No container was found matching "kindnet"
	I1121 14:30:21.712460  213058 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:30:21.712511  213058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:30:21.747158  213058 cri.go:89] found id: ""
	I1121 14:30:21.747190  213058 logs.go:282] 0 containers: []
	W1121 14:30:21.747200  213058 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:30:21.747212  213058 logs.go:123] Gathering logs for kubelet ...
	I1121 14:30:21.747227  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:30:21.883051  213058 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:30:21.883085  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:30:21.962325  213058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:30:21.962342  213058 logs.go:123] Gathering logs for kube-apiserver [934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780] ...
	I1121 14:30:21.962352  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 934eed7bbf3dc6a22575be8055cd940b96038e22e5cd6f3463961a46d6046780"
	I1121 14:30:22.004437  213058 logs.go:123] Gathering logs for etcd [4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359] ...
	I1121 14:30:22.004480  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4be4eebca5559f67c3b255127a96109d74ccb373ff9909925db2fa4458e85359"
	I1121 14:30:22.042296  213058 logs.go:123] Gathering logs for kube-scheduler [e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6] ...
	I1121 14:30:22.042334  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e6dfb6e1dc1c8265272d63c384327daa0fd5fbe86ca50bd0d4f8752e8874a0b6"
	I1121 14:30:22.102267  213058 logs.go:123] Gathering logs for kube-scheduler [f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545] ...
	I1121 14:30:22.102308  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5bffdee5fed58984f6a49db6828b64640859bea1305268fef6a66c2fda74545"
	I1121 14:30:22.146815  213058 logs.go:123] Gathering logs for kube-controller-manager [652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb] ...
	I1121 14:30:22.146855  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 652f4807df85eaf29df01467f1035127421360ce721d4dee3abaffd4baf2fbcb"
	I1121 14:30:22.180355  213058 logs.go:123] Gathering logs for kube-controller-manager [56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463] ...
	I1121 14:30:22.180381  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56b18d01a7baccdb5c947bc18623de36abd1dd8bc833918c4928f4b6da860463"
	I1121 14:30:22.215084  213058 logs.go:123] Gathering logs for dmesg ...
	I1121 14:30:22.215111  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:30:22.229863  213058 logs.go:123] Gathering logs for kube-apiserver [56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324] ...
	I1121 14:30:22.229898  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56e8102371126ace3f42dda018be1e7af8b34b5b0c82b31bf229739d47944324"
	I1121 14:30:22.270974  213058 logs.go:123] Gathering logs for containerd ...
	I1121 14:30:22.271004  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1121 14:30:22.328812  213058 logs.go:123] Gathering logs for container status ...
	I1121 14:30:22.328852  213058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8c4937852627b       56cc512116c8f       9 seconds ago       Running             busybox                   0                   55e524b70455d       busybox                                     default
	f0247ece715b4       52546a367cc9e       14 seconds ago      Running             coredns                   0                   9cde47ebfdaa9       coredns-66bc5c9577-s4rzb                    kube-system
	e791a48ad06a8       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   f3b466e434694       storage-provisioner                         kube-system
	eac07ec6addf2       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   4141af88e24d8       kindnet-kf24h                               kube-system
	3dad3f2e239b1       fc25172553d79       29 seconds ago      Running             kube-proxy                0                   7397f89f7a39e       kube-proxy-wmx7z                            kube-system
	1cd8f6c5ba170       5f1f5298c888d       39 seconds ago      Running             etcd                      0                   c6ae47a54c220       etcd-no-preload-921956                      kube-system
	dceea14c3e55c       7dd6aaa1717ab       39 seconds ago      Running             kube-scheduler            0                   c7aa7d1c46c19       kube-scheduler-no-preload-921956            kube-system
	bc0261d84f559       c80c8dbafe7dd       39 seconds ago      Running             kube-controller-manager   0                   773140ae1c786       kube-controller-manager-no-preload-921956   kube-system
	1477917e1b2ba       c3994bc696102       39 seconds ago      Running             kube-apiserver            0                   9ce03a4904943       kube-apiserver-no-preload-921956            kube-system
	
	
	==> containerd <==
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.596124481Z" level=info msg="Container f0247ece715b4958efae207e856309ce86470b495b029ec8772800dfff991961: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.599105052Z" level=info msg="CreateContainer within sandbox \"f3b466e43469423250b24f5b0c583a3d95b0b05abfa084da3a0674a3b91b7692\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"e791a48ad06a8b7b9513e1f9e2d3ca8efa6a1f6e2a87bde2ee89459cc8d4f03f\""
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.599787885Z" level=info msg="StartContainer for \"e791a48ad06a8b7b9513e1f9e2d3ca8efa6a1f6e2a87bde2ee89459cc8d4f03f\""
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.600927107Z" level=info msg="connecting to shim e791a48ad06a8b7b9513e1f9e2d3ca8efa6a1f6e2a87bde2ee89459cc8d4f03f" address="unix:///run/containerd/s/bba5dae34a16be5c8ec0d6ba65f8dc232accb717c30abd045510178f2ece1097" protocol=ttrpc version=3
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.605360819Z" level=info msg="CreateContainer within sandbox \"9cde47ebfdaa9dc6352c0279f0ef10eb6bc8edbda3437a2be73e3f941df07baa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0247ece715b4958efae207e856309ce86470b495b029ec8772800dfff991961\""
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.606044693Z" level=info msg="StartContainer for \"f0247ece715b4958efae207e856309ce86470b495b029ec8772800dfff991961\""
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.607233194Z" level=info msg="connecting to shim f0247ece715b4958efae207e856309ce86470b495b029ec8772800dfff991961" address="unix:///run/containerd/s/02eaeb044cebb741c9be7dd0480408b231479620953f130c4ea28518fb0c35e1" protocol=ttrpc version=3
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.659637183Z" level=info msg="StartContainer for \"e791a48ad06a8b7b9513e1f9e2d3ca8efa6a1f6e2a87bde2ee89459cc8d4f03f\" returns successfully"
	Nov 21 14:30:11 no-preload-921956 containerd[656]: time="2025-11-21T14:30:11.668393940Z" level=info msg="StartContainer for \"f0247ece715b4958efae207e856309ce86470b495b029ec8772800dfff991961\" returns successfully"
	Nov 21 14:30:15 no-preload-921956 containerd[656]: time="2025-11-21T14:30:15.034110596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:73c5bb38-ca7b-4848-93a8-0622f9c1292e,Namespace:default,Attempt:0,}"
	Nov 21 14:30:15 no-preload-921956 containerd[656]: time="2025-11-21T14:30:15.084373462Z" level=info msg="connecting to shim 55e524b70455dae1bc437f826bd01d57b2251dbf52109d5dcb25d763ab0edb06" address="unix:///run/containerd/s/4feb4fb31aa5a0c32168b8915d9839ae11cc3ce53dd9bee66d84fc9395ffbfd9" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:30:15 no-preload-921956 containerd[656]: time="2025-11-21T14:30:15.168491854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:73c5bb38-ca7b-4848-93a8-0622f9c1292e,Namespace:default,Attempt:0,} returns sandbox id \"55e524b70455dae1bc437f826bd01d57b2251dbf52109d5dcb25d763ab0edb06\""
	Nov 21 14:30:15 no-preload-921956 containerd[656]: time="2025-11-21T14:30:15.170616417Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.263294009Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.264248753Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.265905308Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.268366369Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.268840479Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.098179974s"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.268878879Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.273637858Z" level=info msg="CreateContainer within sandbox \"55e524b70455dae1bc437f826bd01d57b2251dbf52109d5dcb25d763ab0edb06\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.283313422Z" level=info msg="Container 8c4937852627be2f75610b3bf01e69fa974c11e5e948a23f0ce22cead778239d: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.290682241Z" level=info msg="CreateContainer within sandbox \"55e524b70455dae1bc437f826bd01d57b2251dbf52109d5dcb25d763ab0edb06\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8c4937852627be2f75610b3bf01e69fa974c11e5e948a23f0ce22cead778239d\""
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.291389237Z" level=info msg="StartContainer for \"8c4937852627be2f75610b3bf01e69fa974c11e5e948a23f0ce22cead778239d\""
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.292566937Z" level=info msg="connecting to shim 8c4937852627be2f75610b3bf01e69fa974c11e5e948a23f0ce22cead778239d" address="unix:///run/containerd/s/4feb4fb31aa5a0c32168b8915d9839ae11cc3ce53dd9bee66d84fc9395ffbfd9" protocol=ttrpc version=3
	Nov 21 14:30:17 no-preload-921956 containerd[656]: time="2025-11-21T14:30:17.356834059Z" level=info msg="StartContainer for \"8c4937852627be2f75610b3bf01e69fa974c11e5e948a23f0ce22cead778239d\" returns successfully"
	
	
	==> coredns [f0247ece715b4958efae207e856309ce86470b495b029ec8772800dfff991961] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35123 - 15966 "HINFO IN 8318159525879143492.5771029268899257213. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016728094s
	
	
	==> describe nodes <==
	Name:               no-preload-921956
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-921956
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=no-preload-921956
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_29_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:29:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-921956
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:30:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:30:21 +0000   Fri, 21 Nov 2025 14:29:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:30:21 +0000   Fri, 21 Nov 2025 14:29:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:30:21 +0000   Fri, 21 Nov 2025 14:29:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:30:21 +0000   Fri, 21 Nov 2025 14:30:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-921956
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                1dcac8a0-c5fe-4b74-ba51-ed10e93db1e4
	  Boot ID:                    f900700b-0668-4d24-87ff-85e15fbda365
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-s4rzb                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-no-preload-921956                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-kf24h                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-921956             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-921956    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-wmx7z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-921956             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node no-preload-921956 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node no-preload-921956 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x7 over 41s)  kubelet          Node no-preload-921956 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  35s                kubelet          Node no-preload-921956 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s                kubelet          Node no-preload-921956 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s                kubelet          Node no-preload-921956 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node no-preload-921956 event: Registered Node no-preload-921956 in Controller
	  Normal  NodeReady                15s                kubelet          Node no-preload-921956 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001887] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086016] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.440508] i8042: Warning: Keylock active
	[  +0.011202] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.526419] block sda: the capability attribute has been deprecated.
	[  +0.095215] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027093] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.485024] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [1cd8f6c5ba170d50593b90924ece3788f3f7ca38f69386bcb4ca7460314ee602] <==
	{"level":"warn","ts":"2025-11-21T14:29:47.878895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.887804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.894410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.900867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.909263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.920479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.927845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.934193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.940976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.950726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.958627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.964786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.972333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.979064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.985577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.993352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:47.999726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.006386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.014105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.022067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.028438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.045226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.052092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.058771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:29:48.113695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56782","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:30:26 up  1:12,  0 user,  load average: 4.00, 3.08, 1.95
	Linux no-preload-921956 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eac07ec6addf2c3febabe11770b0db6eabded99628063a2320ab08d5aa9cdd49] <==
	I1121 14:30:00.835156       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:30:00.835421       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1121 14:30:00.835585       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:30:00.835625       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:30:00.835654       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:30:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:30:01.041084       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:30:01.041134       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:30:01.041147       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:30:01.041272       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:30:01.432758       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:30:01.432792       1 metrics.go:72] Registering metrics
	I1121 14:30:01.432861       1 controller.go:711] "Syncing nftables rules"
	I1121 14:30:11.041897       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1121 14:30:11.041946       1 main.go:301] handling current node
	I1121 14:30:21.043666       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1121 14:30:21.043701       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1477917e1b2ba485a1dafbeed3092c99981ab3ad1049c6edfeaa40700522baa0] <==
	I1121 14:29:48.663816       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:29:48.666143       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:29:48.667239       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:48.667302       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:29:48.672625       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:48.673341       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:29:48.847047       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:29:49.552186       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:29:49.556456       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:29:49.556474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:29:50.185571       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:29:50.235452       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:29:50.365412       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:29:50.377977       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1121 14:29:50.379591       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:29:50.387072       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:29:51.073317       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:29:51.535164       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:29:51.549117       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:29:51.559253       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:29:56.775670       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:29:56.826496       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:56.831297       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:29:57.025433       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1121 14:30:22.848164       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:56266: use of closed network connection
	
	
	==> kube-controller-manager [bc0261d84f559991c2c7db2cb8fe481647263c9de84272911c0785f71feff57d] <==
	I1121 14:29:56.038702       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:29:56.045087       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:29:56.047407       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 14:29:56.047567       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 14:29:56.047671       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-921956"
	I1121 14:29:56.047720       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 14:29:56.072292       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:29:56.072337       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:29:56.072333       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:29:56.073044       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:29:56.073217       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:29:56.073249       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:29:56.073646       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:29:56.073724       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:29:56.073748       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:29:56.073860       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:29:56.073758       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:29:56.074477       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:29:56.076345       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:29:56.077593       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 14:29:56.080104       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:29:56.081343       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:29:56.083488       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:29:56.099986       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:30:16.051796       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3dad3f2e239b136aa9dce1235e9f83bbd957833abd6ad7034e20e8959c852a1c] <==
	I1121 14:29:57.504793       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:29:57.569093       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:29:57.669420       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:29:57.669502       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1121 14:29:57.669659       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:29:57.692870       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:29:57.692927       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:29:57.698501       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:29:57.698871       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:29:57.699346       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:29:57.701848       1 config.go:309] "Starting node config controller"
	I1121 14:29:57.701907       1 config.go:200] "Starting service config controller"
	I1121 14:29:57.701909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:29:57.701939       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:29:57.701958       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:29:57.701963       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:29:57.701974       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:29:57.701978       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:29:57.803028       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:29:57.803065       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:29:57.803093       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:29:57.803108       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [dceea14c3e55c1a529a35c8e722b2d06d123c9b495c35eaff2b753a6f6697b67] <==
	E1121 14:29:48.618742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:29:48.618804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:29:48.618802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:29:48.618808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:29:48.618843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:29:48.618938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:29:48.619124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:29:48.619244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:29:49.438503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 14:29:49.464267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:29:49.527091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:29:49.549569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:29:49.584100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:29:49.616890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:29:49.641312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:29:49.697686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:29:49.718105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:29:49.727889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:29:49.781417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:29:49.781426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:29:49.853146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:29:49.890894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:29:49.911488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:29:49.981052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1121 14:29:51.214830       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:29:52 no-preload-921956 kubelet[2141]: I1121 14:29:52.452891    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-921956" podStartSLOduration=3.452867833 podStartE2EDuration="3.452867833s" podCreationTimestamp="2025-11-21 14:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:52.44283189 +0000 UTC m=+1.141650376" watchObservedRunningTime="2025-11-21 14:29:52.452867833 +0000 UTC m=+1.151686315"
	Nov 21 14:29:52 no-preload-921956 kubelet[2141]: I1121 14:29:52.463635    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-921956" podStartSLOduration=1.463617617 podStartE2EDuration="1.463617617s" podCreationTimestamp="2025-11-21 14:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:52.463603925 +0000 UTC m=+1.162422411" watchObservedRunningTime="2025-11-21 14:29:52.463617617 +0000 UTC m=+1.162436106"
	Nov 21 14:29:52 no-preload-921956 kubelet[2141]: I1121 14:29:52.463750    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-921956" podStartSLOduration=1.4637427920000001 podStartE2EDuration="1.463742792s" podCreationTimestamp="2025-11-21 14:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:52.453267749 +0000 UTC m=+1.152086236" watchObservedRunningTime="2025-11-21 14:29:52.463742792 +0000 UTC m=+1.162561278"
	Nov 21 14:29:52 no-preload-921956 kubelet[2141]: I1121 14:29:52.485271    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-921956" podStartSLOduration=1.485248201 podStartE2EDuration="1.485248201s" podCreationTimestamp="2025-11-21 14:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:52.475134404 +0000 UTC m=+1.173952890" watchObservedRunningTime="2025-11-21 14:29:52.485248201 +0000 UTC m=+1.184066687"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.068280    2141 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.069161    2141 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816659    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d5a84f9-144c-4920-a08d-478587a56498-xtables-lock\") pod \"kube-proxy-wmx7z\" (UID: \"7d5a84f9-144c-4920-a08d-478587a56498\") " pod="kube-system/kube-proxy-wmx7z"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816708    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hckm\" (UniqueName: \"kubernetes.io/projected/7d5a84f9-144c-4920-a08d-478587a56498-kube-api-access-2hckm\") pod \"kube-proxy-wmx7z\" (UID: \"7d5a84f9-144c-4920-a08d-478587a56498\") " pod="kube-system/kube-proxy-wmx7z"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816738    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c698f297-3ff4-4f90-a871-5c4c944b9e61-cni-cfg\") pod \"kindnet-kf24h\" (UID: \"c698f297-3ff4-4f90-a871-5c4c944b9e61\") " pod="kube-system/kindnet-kf24h"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816760    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c698f297-3ff4-4f90-a871-5c4c944b9e61-lib-modules\") pod \"kindnet-kf24h\" (UID: \"c698f297-3ff4-4f90-a871-5c4c944b9e61\") " pod="kube-system/kindnet-kf24h"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816781    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjfw\" (UniqueName: \"kubernetes.io/projected/c698f297-3ff4-4f90-a871-5c4c944b9e61-kube-api-access-ljjfw\") pod \"kindnet-kf24h\" (UID: \"c698f297-3ff4-4f90-a871-5c4c944b9e61\") " pod="kube-system/kindnet-kf24h"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816843    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7d5a84f9-144c-4920-a08d-478587a56498-kube-proxy\") pod \"kube-proxy-wmx7z\" (UID: \"7d5a84f9-144c-4920-a08d-478587a56498\") " pod="kube-system/kube-proxy-wmx7z"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816892    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d5a84f9-144c-4920-a08d-478587a56498-lib-modules\") pod \"kube-proxy-wmx7z\" (UID: \"7d5a84f9-144c-4920-a08d-478587a56498\") " pod="kube-system/kube-proxy-wmx7z"
	Nov 21 14:29:56 no-preload-921956 kubelet[2141]: I1121 14:29:56.816948    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c698f297-3ff4-4f90-a871-5c4c944b9e61-xtables-lock\") pod \"kindnet-kf24h\" (UID: \"c698f297-3ff4-4f90-a871-5c4c944b9e61\") " pod="kube-system/kindnet-kf24h"
	Nov 21 14:29:58 no-preload-921956 kubelet[2141]: I1121 14:29:58.461118    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wmx7z" podStartSLOduration=2.46109374 podStartE2EDuration="2.46109374s" podCreationTimestamp="2025-11-21 14:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:29:58.450619586 +0000 UTC m=+7.149438070" watchObservedRunningTime="2025-11-21 14:29:58.46109374 +0000 UTC m=+7.159912228"
	Nov 21 14:30:01 no-preload-921956 kubelet[2141]: I1121 14:30:01.491757    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kf24h" podStartSLOduration=2.548497999 podStartE2EDuration="5.491739095s" podCreationTimestamp="2025-11-21 14:29:56 +0000 UTC" firstStartedPulling="2025-11-21 14:29:57.562187822 +0000 UTC m=+6.261006301" lastFinishedPulling="2025-11-21 14:30:00.505428926 +0000 UTC m=+9.204247397" observedRunningTime="2025-11-21 14:30:01.48249203 +0000 UTC m=+10.181310521" watchObservedRunningTime="2025-11-21 14:30:01.491739095 +0000 UTC m=+10.190557581"
	Nov 21 14:30:11 no-preload-921956 kubelet[2141]: I1121 14:30:11.123299    2141 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:30:11 no-preload-921956 kubelet[2141]: I1121 14:30:11.217716    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4941c273-72bf-49af-ad72-793444a43d21-config-volume\") pod \"coredns-66bc5c9577-s4rzb\" (UID: \"4941c273-72bf-49af-ad72-793444a43d21\") " pod="kube-system/coredns-66bc5c9577-s4rzb"
	Nov 21 14:30:11 no-preload-921956 kubelet[2141]: I1121 14:30:11.217767    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdnbd\" (UniqueName: \"kubernetes.io/projected/4941c273-72bf-49af-ad72-793444a43d21-kube-api-access-kdnbd\") pod \"coredns-66bc5c9577-s4rzb\" (UID: \"4941c273-72bf-49af-ad72-793444a43d21\") " pod="kube-system/coredns-66bc5c9577-s4rzb"
	Nov 21 14:30:11 no-preload-921956 kubelet[2141]: I1121 14:30:11.217792    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgngm\" (UniqueName: \"kubernetes.io/projected/75fb9c04-833c-4511-83c7-380f4848e49d-kube-api-access-xgngm\") pod \"storage-provisioner\" (UID: \"75fb9c04-833c-4511-83c7-380f4848e49d\") " pod="kube-system/storage-provisioner"
	Nov 21 14:30:11 no-preload-921956 kubelet[2141]: I1121 14:30:11.217813    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/75fb9c04-833c-4511-83c7-380f4848e49d-tmp\") pod \"storage-provisioner\" (UID: \"75fb9c04-833c-4511-83c7-380f4848e49d\") " pod="kube-system/storage-provisioner"
	Nov 21 14:30:12 no-preload-921956 kubelet[2141]: I1121 14:30:12.489077    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.489054503 podStartE2EDuration="15.489054503s" podCreationTimestamp="2025-11-21 14:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:30:12.488769927 +0000 UTC m=+21.187588414" watchObservedRunningTime="2025-11-21 14:30:12.489054503 +0000 UTC m=+21.187873004"
	Nov 21 14:30:14 no-preload-921956 kubelet[2141]: I1121 14:30:14.717866    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s4rzb" podStartSLOduration=17.717840588 podStartE2EDuration="17.717840588s" podCreationTimestamp="2025-11-21 14:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:30:12.499285739 +0000 UTC m=+21.198104225" watchObservedRunningTime="2025-11-21 14:30:14.717840588 +0000 UTC m=+23.416659075"
	Nov 21 14:30:14 no-preload-921956 kubelet[2141]: I1121 14:30:14.839225    2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6t8s\" (UniqueName: \"kubernetes.io/projected/73c5bb38-ca7b-4848-93a8-0622f9c1292e-kube-api-access-z6t8s\") pod \"busybox\" (UID: \"73c5bb38-ca7b-4848-93a8-0622f9c1292e\") " pod="default/busybox"
	Nov 21 14:30:17 no-preload-921956 kubelet[2141]: I1121 14:30:17.506909    2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.407150052 podStartE2EDuration="3.506888201s" podCreationTimestamp="2025-11-21 14:30:14 +0000 UTC" firstStartedPulling="2025-11-21 14:30:15.170205799 +0000 UTC m=+23.869024278" lastFinishedPulling="2025-11-21 14:30:17.269943947 +0000 UTC m=+25.968762427" observedRunningTime="2025-11-21 14:30:17.506500039 +0000 UTC m=+26.205318540" watchObservedRunningTime="2025-11-21 14:30:17.506888201 +0000 UTC m=+26.205706689"
	
	
	==> storage-provisioner [e791a48ad06a8b7b9513e1f9e2d3ca8efa6a1f6e2a87bde2ee89459cc8d4f03f] <==
	I1121 14:30:11.670154       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:30:11.682926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:30:11.682992       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:30:11.688059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:11.694400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:30:11.694824       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:30:11.695142       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9b2ba257-b216-4d68-8b76-44e8d620e754", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-921956_b4b04708-11b1-4a5e-aeb4-de08a1a4cf98 became leader
	I1121 14:30:11.695242       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-921956_b4b04708-11b1-4a5e-aeb4-de08a1a4cf98!
	W1121 14:30:11.698336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:11.702840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:30:11.796200       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-921956_b4b04708-11b1-4a5e-aeb4-de08a1a4cf98!
	W1121 14:30:13.706791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:13.710561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:15.713197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:15.717390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:17.721086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:17.726624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:19.730502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:19.736180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:21.740847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:21.747343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:23.751127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:23.755918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:25.759665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:30:25.765623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-921956 -n no-preload-921956
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-921956 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (13.03s)
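For context, all four DeployApp failures in this run reduce to the same assertion at start_stop_delete_test.go:194: `kubectl exec busybox -- /bin/sh -c "ulimit -n"` returns 1024 instead of the expected 1048576. The following is a minimal standalone Go sketch (hypothetical file name, not part of the test suite) that mirrors that check; it assumes kubectl is on PATH, the busybox pod from testdata/busybox.yaml is running in the default namespace, and uses the context name from this run's logs.

	// reproduce_ulimit_check.go - hypothetical sketch mirroring the failing
	// assertion; not the actual minikube test code.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "no-preload-921956" // kubectl context of the cluster under test (from this run)
		// Run the same command the test runs inside the busybox pod.
		out, err := exec.Command("kubectl", "--context", ctx,
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kubectl exec failed:", err)
			os.Exit(1)
		}
		got := strings.TrimSpace(string(out))
		if got != "1048576" {
			// This is the condition the report shows: got "1024".
			fmt.Printf("'ulimit -n' returned %s, expected 1048576\n", got)
			os.Exit(1)
		}
		fmt.Println("'ulimit -n' is 1048576 as expected")
	}

Note that the container's open-file limit comes from the container runtime / node configuration (the docker inspect output below shows "Ulimits": [] for the node container), so the sketch only reproduces the observation, not the root cause.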

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (14.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-013140 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9ddebfb3-80d6-4623-aa37-0e3ce0fef04f] Pending
helpers_test.go:352: "busybox" [9ddebfb3-80d6-4623-aa37-0e3ce0fef04f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9ddebfb3-80d6-4623-aa37-0e3ce0fef04f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005451827s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-013140 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-013140
helpers_test.go:243: (dbg) docker inspect embed-certs-013140:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3",
	        "Created": "2025-11-21T14:31:46.141263741Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291736,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:31:46.187510679Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3/hostname",
	        "HostsPath": "/var/lib/docker/containers/cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3/hosts",
	        "LogPath": "/var/lib/docker/containers/cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3/cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3-json.log",
	        "Name": "/embed-certs-013140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-013140:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-013140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3",
	                "LowerDir": "/var/lib/docker/overlay2/fd941f498a6cf8c5ff5a99fd2dd988f0cc4bc487fd6f0021c002431f13528818-init/diff:/var/lib/docker/overlay2/a649757dd9587fa5a20ca8a56ec1923099f2a5e912dc7e8e1dfa08e79248b59f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fd941f498a6cf8c5ff5a99fd2dd988f0cc4bc487fd6f0021c002431f13528818/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fd941f498a6cf8c5ff5a99fd2dd988f0cc4bc487fd6f0021c002431f13528818/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fd941f498a6cf8c5ff5a99fd2dd988f0cc4bc487fd6f0021c002431f13528818/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-013140",
	                "Source": "/var/lib/docker/volumes/embed-certs-013140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-013140",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-013140",
	                "name.minikube.sigs.k8s.io": "embed-certs-013140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "083559fae12cc0f6d45a8bada64938617602030398964146e6f624238aa2f06d",
	            "SandboxKey": "/var/run/docker/netns/083559fae12c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-013140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a37427e1a237ca51fcb15868ce9620ac7feb39d1a540dac313b302224f6cad5",
	                    "EndpointID": "af86ca8cc0e0b41aaeef87ac35ac8723f583be1717424c8f353f22ae7431544a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "3e:0e:43:b4:97:8b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-013140",
	                        "cd6ba875b6af"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-013140 -n embed-certs-013140
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-013140 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-013140 logs -n 25: (1.14207777s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ delete  │ -p default-k8s-diff-port-376255                                                                                                                                                                                                                     │ default-k8s-diff-port-376255 │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ unpause │ -p old-k8s-version-012258 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ delete  │ -p old-k8s-version-012258                                                                                                                                                                                                                           │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ delete  │ -p default-k8s-diff-port-376255                                                                                                                                                                                                                     │ default-k8s-diff-port-376255 │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ delete  │ -p disable-driver-mounts-088626                                                                                                                                                                                                                     │ disable-driver-mounts-088626 │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ start   │ -p embed-certs-013140 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-013140           │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:32 UTC │
	│ delete  │ -p old-k8s-version-012258                                                                                                                                                                                                                           │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ start   │ -p auto-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-459127                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:32 UTC │
	│ image   │ no-preload-921956 image list --format=json                                                                                                                                                                                                          │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ pause   │ -p no-preload-921956 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ unpause │ -p no-preload-921956 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ delete  │ -p no-preload-921956                                                                                                                                                                                                                                │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ delete  │ -p no-preload-921956                                                                                                                                                                                                                                │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ start   │ -p kindnet-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd                                                                                                      │ kindnet-459127               │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-163061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ stop    │ -p newest-cni-163061 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ addons  │ enable dashboard -p newest-cni-163061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ start   │ -p newest-cni-163061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ image   │ newest-cni-163061 image list --format=json                                                                                                                                                                                                          │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ pause   │ -p newest-cni-163061 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ unpause │ -p newest-cni-163061 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ delete  │ -p newest-cni-163061                                                                                                                                                                                                                                │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ delete  │ -p newest-cni-163061                                                                                                                                                                                                                                │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ start   │ -p calico-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd                                                                                                        │ calico-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │                     │
	│ ssh     │ -p auto-459127 pgrep -a kubelet                                                                                                                                                                                                                     │ auto-459127                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:32:25
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:32:25.574614  306176 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:32:25.574915  306176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:32:25.574927  306176 out.go:374] Setting ErrFile to fd 2...
	I1121 14:32:25.574938  306176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:32:25.575167  306176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:32:25.575670  306176 out.go:368] Setting JSON to false
	I1121 14:32:25.576858  306176 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4488,"bootTime":1763731058,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:32:25.576950  306176 start.go:143] virtualization: kvm guest
	I1121 14:32:25.579187  306176 out.go:179] * [calico-459127] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:32:25.581042  306176 notify.go:221] Checking for updates...
	I1121 14:32:25.581082  306176 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:32:25.582720  306176 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:32:25.584235  306176 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:32:25.585621  306176 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 14:32:25.587153  306176 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:32:25.588709  306176 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:32:25.591028  306176 config.go:182] Loaded profile config "auto-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:32:25.591193  306176 config.go:182] Loaded profile config "embed-certs-013140": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:32:25.591289  306176 config.go:182] Loaded profile config "kindnet-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:32:25.591418  306176 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:32:25.624674  306176 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:32:25.624844  306176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:32:25.704912  306176 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:32:25.695049107 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:32:25.705073  306176 docker.go:319] overlay module found
	I1121 14:32:25.707070  306176 out.go:179] * Using the docker driver based on user configuration
	I1121 14:32:25.708291  306176 start.go:309] selected driver: docker
	I1121 14:32:25.708308  306176 start.go:930] validating driver "docker" against <nil>
	I1121 14:32:25.708318  306176 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:32:25.708840  306176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:32:25.784750  306176 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:32:25.772341931 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:32:25.784982  306176 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:32:25.785211  306176 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:32:25.787649  306176 out.go:179] * Using Docker driver with root privileges
	I1121 14:32:25.789192  306176 cni.go:84] Creating CNI manager for "calico"
	I1121 14:32:25.789218  306176 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1121 14:32:25.789342  306176 start.go:353] cluster config:
	{Name:calico-459127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-459127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:32:25.791158  306176 out.go:179] * Starting "calico-459127" primary control-plane node in "calico-459127" cluster
	I1121 14:32:25.792499  306176 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:32:25.794109  306176 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:32:25.795413  306176 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:32:25.795461  306176 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1121 14:32:25.795475  306176 cache.go:65] Caching tarball of preloaded images
	I1121 14:32:25.795505  306176 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:32:25.795607  306176 preload.go:238] Found /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1121 14:32:25.795627  306176 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 14:32:25.795796  306176 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/config.json ...
	I1121 14:32:25.795830  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/config.json: {Name:mkcba83e453a390792167ca348e7a7efc2dd1ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:25.821977  306176 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:32:25.822007  306176 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:32:25.822028  306176 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:32:25.822068  306176 start.go:360] acquireMachinesLock for calico-459127: {Name:mk4243ffc2c1ce567ec0215b16e180977fa504cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:32:25.822215  306176 start.go:364] duration metric: took 123.047µs to acquireMachinesLock for "calico-459127"
	I1121 14:32:25.822252  306176 start.go:93] Provisioning new machine with config: &{Name:calico-459127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-459127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:32:25.822349  306176 start.go:125] createHost starting for "" (driver="docker")
	W1121 14:32:23.543989  290092 node_ready.go:57] node "embed-certs-013140" has "Ready":"False" status (will retry)
	I1121 14:32:24.043525  290092 node_ready.go:49] node "embed-certs-013140" is "Ready"
	I1121 14:32:24.043572  290092 node_ready.go:38] duration metric: took 12.003797013s for node "embed-certs-013140" to be "Ready" ...
	I1121 14:32:24.043589  290092 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:32:24.043635  290092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:32:24.057319  290092 api_server.go:72] duration metric: took 12.340324426s to wait for apiserver process to appear ...
	I1121 14:32:24.057348  290092 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:32:24.057371  290092 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:32:24.062890  290092 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:32:24.063908  290092 api_server.go:141] control plane version: v1.34.1
	I1121 14:32:24.063963  290092 api_server.go:131] duration metric: took 6.582168ms to wait for apiserver health ...
	I1121 14:32:24.063980  290092 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:32:24.067640  290092 system_pods.go:59] 8 kube-system pods found
	I1121 14:32:24.067680  290092 system_pods.go:61] "coredns-66bc5c9577-r95cs" [f98cd5f5-83b2-4a40-b75d-868145de6f36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:24.067705  290092 system_pods.go:61] "etcd-embed-certs-013140" [48adac7e-ce34-4f76-9d7e-f066a08d5674] Running
	I1121 14:32:24.067714  290092 system_pods.go:61] "kindnet-2dvsb" [3a733ace-4ace-47c9-b6b9-8e5f65933c49] Running
	I1121 14:32:24.067718  290092 system_pods.go:61] "kube-apiserver-embed-certs-013140" [61277438-5d91-49ac-bc7b-e6fdd718b06d] Running
	I1121 14:32:24.067722  290092 system_pods.go:61] "kube-controller-manager-embed-certs-013140" [25f8e054-6147-477c-b4c0-6d919ef9154e] Running
	I1121 14:32:24.067728  290092 system_pods.go:61] "kube-proxy-klwwh" [5a583a7c-33a2-41cf-a1f9-cf86db9bd461] Running
	I1121 14:32:24.067732  290092 system_pods.go:61] "kube-scheduler-embed-certs-013140" [dbd79435-c118-436d-9baa-dcab2d85b718] Running
	I1121 14:32:24.067739  290092 system_pods.go:61] "storage-provisioner" [9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:24.067755  290092 system_pods.go:74] duration metric: took 3.765355ms to wait for pod list to return data ...
	I1121 14:32:24.067767  290092 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:32:24.070586  290092 default_sa.go:45] found service account: "default"
	I1121 14:32:24.070610  290092 default_sa.go:55] duration metric: took 2.831061ms for default service account to be created ...
	I1121 14:32:24.070628  290092 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:32:24.073446  290092 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:24.073474  290092 system_pods.go:89] "coredns-66bc5c9577-r95cs" [f98cd5f5-83b2-4a40-b75d-868145de6f36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:24.073479  290092 system_pods.go:89] "etcd-embed-certs-013140" [48adac7e-ce34-4f76-9d7e-f066a08d5674] Running
	I1121 14:32:24.073489  290092 system_pods.go:89] "kindnet-2dvsb" [3a733ace-4ace-47c9-b6b9-8e5f65933c49] Running
	I1121 14:32:24.073493  290092 system_pods.go:89] "kube-apiserver-embed-certs-013140" [61277438-5d91-49ac-bc7b-e6fdd718b06d] Running
	I1121 14:32:24.073497  290092 system_pods.go:89] "kube-controller-manager-embed-certs-013140" [25f8e054-6147-477c-b4c0-6d919ef9154e] Running
	I1121 14:32:24.073500  290092 system_pods.go:89] "kube-proxy-klwwh" [5a583a7c-33a2-41cf-a1f9-cf86db9bd461] Running
	I1121 14:32:24.073503  290092 system_pods.go:89] "kube-scheduler-embed-certs-013140" [dbd79435-c118-436d-9baa-dcab2d85b718] Running
	I1121 14:32:24.073507  290092 system_pods.go:89] "storage-provisioner" [9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:24.073524  290092 retry.go:31] will retry after 302.452336ms: missing components: kube-dns
	I1121 14:32:24.382061  290092 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:24.382105  290092 system_pods.go:89] "coredns-66bc5c9577-r95cs" [f98cd5f5-83b2-4a40-b75d-868145de6f36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:24.382114  290092 system_pods.go:89] "etcd-embed-certs-013140" [48adac7e-ce34-4f76-9d7e-f066a08d5674] Running
	I1121 14:32:24.382122  290092 system_pods.go:89] "kindnet-2dvsb" [3a733ace-4ace-47c9-b6b9-8e5f65933c49] Running
	I1121 14:32:24.382127  290092 system_pods.go:89] "kube-apiserver-embed-certs-013140" [61277438-5d91-49ac-bc7b-e6fdd718b06d] Running
	I1121 14:32:24.382132  290092 system_pods.go:89] "kube-controller-manager-embed-certs-013140" [25f8e054-6147-477c-b4c0-6d919ef9154e] Running
	I1121 14:32:24.382142  290092 system_pods.go:89] "kube-proxy-klwwh" [5a583a7c-33a2-41cf-a1f9-cf86db9bd461] Running
	I1121 14:32:24.382148  290092 system_pods.go:89] "kube-scheduler-embed-certs-013140" [dbd79435-c118-436d-9baa-dcab2d85b718] Running
	I1121 14:32:24.382155  290092 system_pods.go:89] "storage-provisioner" [9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:24.382176  290092 retry.go:31] will retry after 240.891701ms: missing components: kube-dns
	I1121 14:32:24.627792  290092 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:24.627829  290092 system_pods.go:89] "coredns-66bc5c9577-r95cs" [f98cd5f5-83b2-4a40-b75d-868145de6f36] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:24.627838  290092 system_pods.go:89] "etcd-embed-certs-013140" [48adac7e-ce34-4f76-9d7e-f066a08d5674] Running
	I1121 14:32:24.627845  290092 system_pods.go:89] "kindnet-2dvsb" [3a733ace-4ace-47c9-b6b9-8e5f65933c49] Running
	I1121 14:32:24.627849  290092 system_pods.go:89] "kube-apiserver-embed-certs-013140" [61277438-5d91-49ac-bc7b-e6fdd718b06d] Running
	I1121 14:32:24.627860  290092 system_pods.go:89] "kube-controller-manager-embed-certs-013140" [25f8e054-6147-477c-b4c0-6d919ef9154e] Running
	I1121 14:32:24.627866  290092 system_pods.go:89] "kube-proxy-klwwh" [5a583a7c-33a2-41cf-a1f9-cf86db9bd461] Running
	I1121 14:32:24.627873  290092 system_pods.go:89] "kube-scheduler-embed-certs-013140" [dbd79435-c118-436d-9baa-dcab2d85b718] Running
	I1121 14:32:24.627878  290092 system_pods.go:89] "storage-provisioner" [9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec] Running
	I1121 14:32:24.627889  290092 system_pods.go:126] duration metric: took 557.254395ms to wait for k8s-apps to be running ...
	I1121 14:32:24.627903  290092 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:32:24.627954  290092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:32:24.641532  290092 system_svc.go:56] duration metric: took 13.619346ms WaitForService to wait for kubelet
	I1121 14:32:24.641578  290092 kubeadm.go:587] duration metric: took 12.924592276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:32:24.641603  290092 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:32:24.645040  290092 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:32:24.645064  290092 node_conditions.go:123] node cpu capacity is 8
	I1121 14:32:24.645087  290092 node_conditions.go:105] duration metric: took 3.476716ms to run NodePressure ...
	I1121 14:32:24.645099  290092 start.go:242] waiting for startup goroutines ...
	I1121 14:32:24.645109  290092 start.go:247] waiting for cluster config update ...
	I1121 14:32:24.645118  290092 start.go:256] writing updated cluster config ...
	I1121 14:32:24.645370  290092 ssh_runner.go:195] Run: rm -f paused
	I1121 14:32:24.649592  290092 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:32:24.653589  290092 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r95cs" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.661795  290092 pod_ready.go:94] pod "coredns-66bc5c9577-r95cs" is "Ready"
	I1121 14:32:25.661831  290092 pod_ready.go:86] duration metric: took 1.008214212s for pod "coredns-66bc5c9577-r95cs" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.665095  290092 pod_ready.go:83] waiting for pod "etcd-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.674067  290092 pod_ready.go:94] pod "etcd-embed-certs-013140" is "Ready"
	I1121 14:32:25.674102  290092 pod_ready.go:86] duration metric: took 8.959279ms for pod "etcd-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.678172  290092 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.684812  290092 pod_ready.go:94] pod "kube-apiserver-embed-certs-013140" is "Ready"
	I1121 14:32:25.684903  290092 pod_ready.go:86] duration metric: took 6.621169ms for pod "kube-apiserver-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.688844  290092 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.860621  290092 pod_ready.go:94] pod "kube-controller-manager-embed-certs-013140" is "Ready"
	I1121 14:32:25.860652  290092 pod_ready.go:86] duration metric: took 171.708723ms for pod "kube-controller-manager-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.058046  290092 pod_ready.go:83] waiting for pod "kube-proxy-klwwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.458526  290092 pod_ready.go:94] pod "kube-proxy-klwwh" is "Ready"
	I1121 14:32:26.458588  290092 pod_ready.go:86] duration metric: took 400.510052ms for pod "kube-proxy-klwwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.659912  290092 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.319154  296399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:32:25.818741  296399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:32:26.319328  296399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:32:26.819253  296399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:32:26.909328  296399 kubeadm.go:1114] duration metric: took 4.760483045s to wait for elevateKubeSystemPrivileges
	I1121 14:32:26.909366  296399 kubeadm.go:403] duration metric: took 18.835973773s to StartCluster
	I1121 14:32:26.909390  296399 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:26.909471  296399 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:32:26.911434  296399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:26.911715  296399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:32:26.911718  296399 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:32:26.911825  296399 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:32:26.911915  296399 addons.go:70] Setting storage-provisioner=true in profile "kindnet-459127"
	I1121 14:32:26.911929  296399 config.go:182] Loaded profile config "kindnet-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:32:26.911944  296399 addons.go:239] Setting addon storage-provisioner=true in "kindnet-459127"
	I1121 14:32:26.911995  296399 host.go:66] Checking if "kindnet-459127" exists ...
	I1121 14:32:26.912068  296399 addons.go:70] Setting default-storageclass=true in profile "kindnet-459127"
	I1121 14:32:26.912097  296399 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-459127"
	I1121 14:32:26.912416  296399 cli_runner.go:164] Run: docker container inspect kindnet-459127 --format={{.State.Status}}
	I1121 14:32:26.912618  296399 cli_runner.go:164] Run: docker container inspect kindnet-459127 --format={{.State.Status}}
	I1121 14:32:26.913475  296399 out.go:179] * Verifying Kubernetes components...
	I1121 14:32:26.915214  296399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:32:26.944336  296399 addons.go:239] Setting addon default-storageclass=true in "kindnet-459127"
	I1121 14:32:26.944379  296399 host.go:66] Checking if "kindnet-459127" exists ...
	I1121 14:32:26.945021  296399 cli_runner.go:164] Run: docker container inspect kindnet-459127 --format={{.State.Status}}
	I1121 14:32:26.946999  296399 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:32:27.057868  290092 pod_ready.go:94] pod "kube-scheduler-embed-certs-013140" is "Ready"
	I1121 14:32:27.057896  290092 pod_ready.go:86] duration metric: took 397.954176ms for pod "kube-scheduler-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:27.057912  290092 pod_ready.go:40] duration metric: took 2.408282324s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:32:27.120222  290092 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:32:27.121833  290092 out.go:179] * Done! kubectl is now configured to use "embed-certs-013140" cluster and "default" namespace by default
	I1121 14:32:26.948840  296399 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:32:26.948875  296399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:32:26.948980  296399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459127
	I1121 14:32:26.983191  296399 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:32:26.983218  296399 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:32:26.983291  296399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459127
	I1121 14:32:26.985952  296399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/kindnet-459127/id_rsa Username:docker}
	I1121 14:32:27.014487  296399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/kindnet-459127/id_rsa Username:docker}
	I1121 14:32:27.046782  296399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:32:27.102315  296399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:32:27.116897  296399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:32:27.138025  296399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:32:27.288845  296399 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1121 14:32:27.292022  296399 node_ready.go:35] waiting up to 15m0s for node "kindnet-459127" to be "Ready" ...
	I1121 14:32:27.555466  296399 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1121 14:32:23.819696  290817 node_ready.go:57] node "auto-459127" has "Ready":"False" status (will retry)
	I1121 14:32:25.822279  290817 node_ready.go:49] node "auto-459127" is "Ready"
	I1121 14:32:25.822304  290817 node_ready.go:38] duration metric: took 11.00663893s for node "auto-459127" to be "Ready" ...
	I1121 14:32:25.822321  290817 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:32:25.822385  290817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:32:25.841862  290817 api_server.go:72] duration metric: took 11.686182135s to wait for apiserver process to appear ...
	I1121 14:32:25.841894  290817 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:32:25.841917  290817 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:32:25.847430  290817 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1121 14:32:25.848681  290817 api_server.go:141] control plane version: v1.34.1
	I1121 14:32:25.848711  290817 api_server.go:131] duration metric: took 6.809354ms to wait for apiserver health ...
	I1121 14:32:25.848722  290817 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:32:25.855426  290817 system_pods.go:59] 8 kube-system pods found
	I1121 14:32:25.855482  290817 system_pods.go:61] "coredns-66bc5c9577-bqr8h" [b178736d-9f21-4662-bfb0-34d6e721e7ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:25.855494  290817 system_pods.go:61] "etcd-auto-459127" [0e3ae254-bffd-4095-81ca-0c0cda14b7a8] Running
	I1121 14:32:25.855509  290817 system_pods.go:61] "kindnet-5twqm" [8b6c17bf-5774-4417-97f9-16b78c95446f] Running
	I1121 14:32:25.855521  290817 system_pods.go:61] "kube-apiserver-auto-459127" [25b4879e-e0d4-4029-9f8a-226d76793f01] Running
	I1121 14:32:25.855532  290817 system_pods.go:61] "kube-controller-manager-auto-459127" [b0aac631-455a-411b-88f7-dcc10cc6743a] Running
	I1121 14:32:25.855614  290817 system_pods.go:61] "kube-proxy-2n8t9" [705a0196-a043-494c-b8e2-da476def44dc] Running
	I1121 14:32:25.855637  290817 system_pods.go:61] "kube-scheduler-auto-459127" [54482a2a-8f7d-41fe-ad55-4dd0d01027e6] Running
	I1121 14:32:25.856625  290817 system_pods.go:61] "storage-provisioner" [4f2ed244-b242-4e64-8531-c420d67ce642] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:25.856645  290817 system_pods.go:74] duration metric: took 7.914135ms to wait for pod list to return data ...
	I1121 14:32:25.856659  290817 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:32:25.860414  290817 default_sa.go:45] found service account: "default"
	I1121 14:32:25.860450  290817 default_sa.go:55] duration metric: took 3.776975ms for default service account to be created ...
	I1121 14:32:25.860461  290817 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:32:25.868796  290817 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:25.868837  290817 system_pods.go:89] "coredns-66bc5c9577-bqr8h" [b178736d-9f21-4662-bfb0-34d6e721e7ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:25.868845  290817 system_pods.go:89] "etcd-auto-459127" [0e3ae254-bffd-4095-81ca-0c0cda14b7a8] Running
	I1121 14:32:25.868853  290817 system_pods.go:89] "kindnet-5twqm" [8b6c17bf-5774-4417-97f9-16b78c95446f] Running
	I1121 14:32:25.868858  290817 system_pods.go:89] "kube-apiserver-auto-459127" [25b4879e-e0d4-4029-9f8a-226d76793f01] Running
	I1121 14:32:25.868863  290817 system_pods.go:89] "kube-controller-manager-auto-459127" [b0aac631-455a-411b-88f7-dcc10cc6743a] Running
	I1121 14:32:25.868868  290817 system_pods.go:89] "kube-proxy-2n8t9" [705a0196-a043-494c-b8e2-da476def44dc] Running
	I1121 14:32:25.868873  290817 system_pods.go:89] "kube-scheduler-auto-459127" [54482a2a-8f7d-41fe-ad55-4dd0d01027e6] Running
	I1121 14:32:25.868880  290817 system_pods.go:89] "storage-provisioner" [4f2ed244-b242-4e64-8531-c420d67ce642] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:25.868906  290817 retry.go:31] will retry after 209.914619ms: missing components: kube-dns
	I1121 14:32:26.083950  290817 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:26.084003  290817 system_pods.go:89] "coredns-66bc5c9577-bqr8h" [b178736d-9f21-4662-bfb0-34d6e721e7ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:26.084014  290817 system_pods.go:89] "etcd-auto-459127" [0e3ae254-bffd-4095-81ca-0c0cda14b7a8] Running
	I1121 14:32:26.084024  290817 system_pods.go:89] "kindnet-5twqm" [8b6c17bf-5774-4417-97f9-16b78c95446f] Running
	I1121 14:32:26.084032  290817 system_pods.go:89] "kube-apiserver-auto-459127" [25b4879e-e0d4-4029-9f8a-226d76793f01] Running
	I1121 14:32:26.084039  290817 system_pods.go:89] "kube-controller-manager-auto-459127" [b0aac631-455a-411b-88f7-dcc10cc6743a] Running
	I1121 14:32:26.084056  290817 system_pods.go:89] "kube-proxy-2n8t9" [705a0196-a043-494c-b8e2-da476def44dc] Running
	I1121 14:32:26.084063  290817 system_pods.go:89] "kube-scheduler-auto-459127" [54482a2a-8f7d-41fe-ad55-4dd0d01027e6] Running
	I1121 14:32:26.084076  290817 system_pods.go:89] "storage-provisioner" [4f2ed244-b242-4e64-8531-c420d67ce642] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:26.084101  290817 retry.go:31] will retry after 381.085812ms: missing components: kube-dns
	I1121 14:32:26.469503  290817 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:26.469569  290817 system_pods.go:89] "coredns-66bc5c9577-bqr8h" [b178736d-9f21-4662-bfb0-34d6e721e7ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:26.469581  290817 system_pods.go:89] "etcd-auto-459127" [0e3ae254-bffd-4095-81ca-0c0cda14b7a8] Running
	I1121 14:32:26.469591  290817 system_pods.go:89] "kindnet-5twqm" [8b6c17bf-5774-4417-97f9-16b78c95446f] Running
	I1121 14:32:26.469599  290817 system_pods.go:89] "kube-apiserver-auto-459127" [25b4879e-e0d4-4029-9f8a-226d76793f01] Running
	I1121 14:32:26.469607  290817 system_pods.go:89] "kube-controller-manager-auto-459127" [b0aac631-455a-411b-88f7-dcc10cc6743a] Running
	I1121 14:32:26.469614  290817 system_pods.go:89] "kube-proxy-2n8t9" [705a0196-a043-494c-b8e2-da476def44dc] Running
	I1121 14:32:26.469620  290817 system_pods.go:89] "kube-scheduler-auto-459127" [54482a2a-8f7d-41fe-ad55-4dd0d01027e6] Running
	I1121 14:32:26.469629  290817 system_pods.go:89] "storage-provisioner" [4f2ed244-b242-4e64-8531-c420d67ce642] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:26.469653  290817 retry.go:31] will retry after 426.864838ms: missing components: kube-dns
	I1121 14:32:26.901250  290817 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:26.901284  290817 system_pods.go:89] "coredns-66bc5c9577-bqr8h" [b178736d-9f21-4662-bfb0-34d6e721e7ff] Running
	I1121 14:32:26.901295  290817 system_pods.go:89] "etcd-auto-459127" [0e3ae254-bffd-4095-81ca-0c0cda14b7a8] Running
	I1121 14:32:26.901300  290817 system_pods.go:89] "kindnet-5twqm" [8b6c17bf-5774-4417-97f9-16b78c95446f] Running
	I1121 14:32:26.901305  290817 system_pods.go:89] "kube-apiserver-auto-459127" [25b4879e-e0d4-4029-9f8a-226d76793f01] Running
	I1121 14:32:26.901311  290817 system_pods.go:89] "kube-controller-manager-auto-459127" [b0aac631-455a-411b-88f7-dcc10cc6743a] Running
	I1121 14:32:26.901317  290817 system_pods.go:89] "kube-proxy-2n8t9" [705a0196-a043-494c-b8e2-da476def44dc] Running
	I1121 14:32:26.901325  290817 system_pods.go:89] "kube-scheduler-auto-459127" [54482a2a-8f7d-41fe-ad55-4dd0d01027e6] Running
	I1121 14:32:26.901330  290817 system_pods.go:89] "storage-provisioner" [4f2ed244-b242-4e64-8531-c420d67ce642] Running
	I1121 14:32:26.901340  290817 system_pods.go:126] duration metric: took 1.040871748s to wait for k8s-apps to be running ...
	I1121 14:32:26.901354  290817 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:32:26.901401  290817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:32:26.918330  290817 system_svc.go:56] duration metric: took 16.951306ms WaitForService to wait for kubelet
	I1121 14:32:26.918370  290817 kubeadm.go:587] duration metric: took 12.762695032s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:32:26.918395  290817 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:32:26.925066  290817 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:32:26.925098  290817 node_conditions.go:123] node cpu capacity is 8
	I1121 14:32:26.925187  290817 node_conditions.go:105] duration metric: took 6.784662ms to run NodePressure ...
	I1121 14:32:26.925220  290817 start.go:242] waiting for startup goroutines ...
	I1121 14:32:26.925230  290817 start.go:247] waiting for cluster config update ...
	I1121 14:32:26.925281  290817 start.go:256] writing updated cluster config ...
	I1121 14:32:26.925733  290817 ssh_runner.go:195] Run: rm -f paused
	I1121 14:32:26.934635  290817 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:32:26.939829  290817 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bqr8h" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.947802  290817 pod_ready.go:94] pod "coredns-66bc5c9577-bqr8h" is "Ready"
	I1121 14:32:26.947835  290817 pod_ready.go:86] duration metric: took 7.969545ms for pod "coredns-66bc5c9577-bqr8h" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.951820  290817 pod_ready.go:83] waiting for pod "etcd-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.962526  290817 pod_ready.go:94] pod "etcd-auto-459127" is "Ready"
	I1121 14:32:26.962645  290817 pod_ready.go:86] duration metric: took 10.796396ms for pod "etcd-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.967673  290817 pod_ready.go:83] waiting for pod "kube-apiserver-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.980976  290817 pod_ready.go:94] pod "kube-apiserver-auto-459127" is "Ready"
	I1121 14:32:26.981007  290817 pod_ready.go:86] duration metric: took 13.150715ms for pod "kube-apiserver-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.986485  290817 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:27.340483  290817 pod_ready.go:94] pod "kube-controller-manager-auto-459127" is "Ready"
	I1121 14:32:27.340513  290817 pod_ready.go:86] duration metric: took 354.007416ms for pod "kube-controller-manager-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:27.541234  290817 pod_ready.go:83] waiting for pod "kube-proxy-2n8t9" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:27.940047  290817 pod_ready.go:94] pod "kube-proxy-2n8t9" is "Ready"
	I1121 14:32:27.940075  290817 pod_ready.go:86] duration metric: took 398.804636ms for pod "kube-proxy-2n8t9" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:28.140279  290817 pod_ready.go:83] waiting for pod "kube-scheduler-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:28.540502  290817 pod_ready.go:94] pod "kube-scheduler-auto-459127" is "Ready"
	I1121 14:32:28.540527  290817 pod_ready.go:86] duration metric: took 400.223189ms for pod "kube-scheduler-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:28.540578  290817 pod_ready.go:40] duration metric: took 1.605845616s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:32:28.591849  290817 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:32:28.594267  290817 out.go:179] * Done! kubectl is now configured to use "auto-459127" cluster and "default" namespace by default
	I1121 14:32:27.556937  296399 addons.go:530] duration metric: took 645.10611ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:32:27.794789  296399 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-459127" context rescaled to 1 replicas
	W1121 14:32:29.295804  296399 node_ready.go:57] node "kindnet-459127" has "Ready":"False" status (will retry)
	I1121 14:32:25.828660  306176 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:32:25.829043  306176 start.go:159] libmachine.API.Create for "calico-459127" (driver="docker")
	I1121 14:32:25.829092  306176 client.go:173] LocalClient.Create starting
	I1121 14:32:25.829185  306176 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem
	I1121 14:32:25.829222  306176 main.go:143] libmachine: Decoding PEM data...
	I1121 14:32:25.829239  306176 main.go:143] libmachine: Parsing certificate...
	I1121 14:32:25.829324  306176 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem
	I1121 14:32:25.829350  306176 main.go:143] libmachine: Decoding PEM data...
	I1121 14:32:25.829364  306176 main.go:143] libmachine: Parsing certificate...
	I1121 14:32:25.829876  306176 cli_runner.go:164] Run: docker network inspect calico-459127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:32:25.853972  306176 cli_runner.go:211] docker network inspect calico-459127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:32:25.854071  306176 network_create.go:284] running [docker network inspect calico-459127] to gather additional debugging logs...
	I1121 14:32:25.854094  306176 cli_runner.go:164] Run: docker network inspect calico-459127
	W1121 14:32:25.882020  306176 cli_runner.go:211] docker network inspect calico-459127 returned with exit code 1
	I1121 14:32:25.882059  306176 network_create.go:287] error running [docker network inspect calico-459127]: docker network inspect calico-459127: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-459127 not found
	I1121 14:32:25.882092  306176 network_create.go:289] output of [docker network inspect calico-459127]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-459127 not found
	
	** /stderr **
	I1121 14:32:25.882216  306176 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:32:25.907166  306176 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-66cfc06dc768 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:44:28:22:82:94} reservation:<nil>}
	I1121 14:32:25.907971  306176 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-39921db0d513 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:e4:85:98:a5:e3} reservation:<nil>}
	I1121 14:32:25.908985  306176 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-36a8741c90a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:21:99:72:63:4a} reservation:<nil>}
	I1121 14:32:25.911137  306176 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001de39e0}
	I1121 14:32:25.911175  306176 network_create.go:124] attempt to create docker network calico-459127 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1121 14:32:25.911233  306176 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-459127 calico-459127
	I1121 14:32:25.977496  306176 network_create.go:108] docker network calico-459127 192.168.76.0/24 created
	I1121 14:32:25.977534  306176 kic.go:121] calculated static IP "192.168.76.2" for the "calico-459127" container
	I1121 14:32:25.977667  306176 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:32:25.999630  306176 cli_runner.go:164] Run: docker volume create calico-459127 --label name.minikube.sigs.k8s.io=calico-459127 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:32:26.020859  306176 oci.go:103] Successfully created a docker volume calico-459127
	I1121 14:32:26.020942  306176 cli_runner.go:164] Run: docker run --rm --name calico-459127-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-459127 --entrypoint /usr/bin/test -v calico-459127:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:32:26.451151  306176 oci.go:107] Successfully prepared a docker volume calico-459127
	I1121 14:32:26.451255  306176 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:32:26.451269  306176 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:32:26.451375  306176 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-459127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 14:32:31.334283  306176 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-459127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.882832271s)
	I1121 14:32:31.334325  306176 kic.go:203] duration metric: took 4.883051074s to extract preloaded images to volume ...
	W1121 14:32:31.334427  306176 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 14:32:31.334480  306176 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 14:32:31.334528  306176 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:32:31.401637  306176 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-459127 --name calico-459127 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-459127 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-459127 --network calico-459127 --ip 192.168.76.2 --volume calico-459127:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:32:31.768395  306176 cli_runner.go:164] Run: docker container inspect calico-459127 --format={{.State.Running}}
	I1121 14:32:31.790739  306176 cli_runner.go:164] Run: docker container inspect calico-459127 --format={{.State.Status}}
	I1121 14:32:31.812630  306176 cli_runner.go:164] Run: docker exec calico-459127 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:32:31.864206  306176 oci.go:144] the created container "calico-459127" has a running status.
	I1121 14:32:31.864245  306176 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa...
	I1121 14:32:31.998765  306176 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:32:32.031738  306176 cli_runner.go:164] Run: docker container inspect calico-459127 --format={{.State.Status}}
	I1121 14:32:32.059870  306176 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:32:32.059895  306176 kic_runner.go:114] Args: [docker exec --privileged calico-459127 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:32:32.112824  306176 cli_runner.go:164] Run: docker container inspect calico-459127 --format={{.State.Status}}
	I1121 14:32:32.145315  306176 machine.go:94] provisionDockerMachine start ...
	I1121 14:32:32.145425  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:32.175423  306176 main.go:143] libmachine: Using SSH client type: native
	I1121 14:32:32.175891  306176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I1121 14:32:32.175942  306176 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:32:32.340517  306176 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-459127
	
	I1121 14:32:32.340580  306176 ubuntu.go:182] provisioning hostname "calico-459127"
	I1121 14:32:32.340658  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:32.364531  306176 main.go:143] libmachine: Using SSH client type: native
	I1121 14:32:32.364863  306176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I1121 14:32:32.364885  306176 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-459127 && echo "calico-459127" | sudo tee /etc/hostname
	I1121 14:32:32.525073  306176 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-459127
	
	I1121 14:32:32.525164  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:32.549227  306176 main.go:143] libmachine: Using SSH client type: native
	I1121 14:32:32.549502  306176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I1121 14:32:32.549524  306176 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-459127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-459127/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-459127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:32:32.698003  306176 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:32:32.698037  306176 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:32:32.698086  306176 ubuntu.go:190] setting up certificates
	I1121 14:32:32.698097  306176 provision.go:84] configureAuth start
	I1121 14:32:32.698182  306176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-459127
	I1121 14:32:32.722163  306176 provision.go:143] copyHostCerts
	I1121 14:32:32.722240  306176 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:32:32.722254  306176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:32:32.722333  306176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:32:32.722477  306176 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:32:32.722490  306176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:32:32.722531  306176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:32:32.722650  306176 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:32:32.722662  306176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:32:32.722697  306176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:32:32.722783  306176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.calico-459127 san=[127.0.0.1 192.168.76.2 calico-459127 localhost minikube]
	I1121 14:32:33.334931  306176 provision.go:177] copyRemoteCerts
	I1121 14:32:33.334993  306176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:32:33.335029  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:33.352615  306176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa Username:docker}
	I1121 14:32:33.452282  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:32:33.474393  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 14:32:33.494035  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:32:33.512800  306176 provision.go:87] duration metric: took 814.686907ms to configureAuth
	I1121 14:32:33.512847  306176 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:32:33.513050  306176 config.go:182] Loaded profile config "calico-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:32:33.513063  306176 machine.go:97] duration metric: took 1.367725245s to provisionDockerMachine
	I1121 14:32:33.513070  306176 client.go:176] duration metric: took 7.683971643s to LocalClient.Create
	I1121 14:32:33.513090  306176 start.go:167] duration metric: took 7.684053907s to libmachine.API.Create "calico-459127"
	I1121 14:32:33.513103  306176 start.go:293] postStartSetup for "calico-459127" (driver="docker")
	I1121 14:32:33.513114  306176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:32:33.513178  306176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:32:33.513220  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:33.533670  306176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa Username:docker}
	I1121 14:32:33.639457  306176 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:32:33.644098  306176 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:32:33.644136  306176 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:32:33.644157  306176 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:32:33.644209  306176 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:32:33.644312  306176 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:32:33.644438  306176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:32:33.653265  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:32:33.678710  306176 start.go:296] duration metric: took 165.592945ms for postStartSetup
	I1121 14:32:33.679032  306176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-459127
	I1121 14:32:33.698668  306176 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/config.json ...
	I1121 14:32:33.699049  306176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:32:33.699099  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:33.719149  306176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa Username:docker}
	I1121 14:32:33.814333  306176 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:32:33.819681  306176 start.go:128] duration metric: took 7.997311178s to createHost
	I1121 14:32:33.819712  306176 start.go:83] releasing machines lock for "calico-459127", held for 7.99748051s
	I1121 14:32:33.819788  306176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-459127
	I1121 14:32:33.839243  306176 ssh_runner.go:195] Run: cat /version.json
	I1121 14:32:33.839296  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:33.839308  306176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:32:33.839381  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:33.858978  306176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa Username:docker}
	I1121 14:32:33.860292  306176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa Username:docker}
	I1121 14:32:33.953331  306176 ssh_runner.go:195] Run: systemctl --version
	I1121 14:32:34.012582  306176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:32:34.017743  306176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:32:34.017798  306176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:32:34.045369  306176 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:32:34.045398  306176 start.go:496] detecting cgroup driver to use...
	I1121 14:32:34.045435  306176 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:32:34.045637  306176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:32:34.060777  306176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:32:34.074963  306176 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:32:34.075024  306176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:32:34.092095  306176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:32:34.110941  306176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:32:34.192979  306176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:32:34.299046  306176 docker.go:234] disabling docker service ...
	I1121 14:32:34.299105  306176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:32:34.319462  306176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:32:34.332669  306176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:32:34.417677  306176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:32:34.507275  306176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:32:34.521363  306176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:32:34.537081  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:32:34.548441  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:32:34.559336  306176 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:32:34.559400  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:32:34.569217  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:32:34.579343  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:32:34.589163  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:32:34.599142  306176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:32:34.608443  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:32:34.617869  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:32:34.627889  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:32:34.637745  306176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:32:34.646093  306176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:32:34.654819  306176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:32:34.740746  306176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:32:34.837769  306176 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:32:34.837826  306176 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:32:34.842128  306176 start.go:564] Will wait 60s for crictl version
	I1121 14:32:34.842177  306176 ssh_runner.go:195] Run: which crictl
	I1121 14:32:34.845822  306176 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:32:34.872806  306176 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:32:34.872870  306176 ssh_runner.go:195] Run: containerd --version
	I1121 14:32:34.896107  306176 ssh_runner.go:195] Run: containerd --version
	I1121 14:32:34.920572  306176 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	W1121 14:32:31.315006  296399 node_ready.go:57] node "kindnet-459127" has "Ready":"False" status (will retry)
	W1121 14:32:33.795994  296399 node_ready.go:57] node "kindnet-459127" has "Ready":"False" status (will retry)
	I1121 14:32:34.921960  306176 cli_runner.go:164] Run: docker network inspect calico-459127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:32:34.940365  306176 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 14:32:34.945035  306176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:32:34.956162  306176 kubeadm.go:884] updating cluster {Name:calico-459127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-459127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:32:34.956374  306176 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:32:34.956492  306176 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:32:34.984067  306176 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:32:34.984091  306176 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:32:34.984150  306176 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:32:35.011607  306176 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:32:35.011632  306176 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:32:35.011640  306176 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1121 14:32:35.011722  306176 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-459127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-459127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1121 14:32:35.011783  306176 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:32:35.039216  306176 cni.go:84] Creating CNI manager for "calico"
	I1121 14:32:35.039243  306176 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:32:35.039265  306176 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-459127 NodeName:calico-459127 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:32:35.039421  306176 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-459127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:32:35.039479  306176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:32:35.048351  306176 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:32:35.048417  306176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:32:35.057236  306176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1121 14:32:35.071942  306176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:32:35.088624  306176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1121 14:32:35.102148  306176 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:32:35.106208  306176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:32:35.116713  306176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:32:35.200255  306176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:32:35.224100  306176 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127 for IP: 192.168.76.2
	I1121 14:32:35.224125  306176 certs.go:195] generating shared ca certs ...
	I1121 14:32:35.224146  306176 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.224303  306176 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:32:35.224362  306176 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:32:35.224376  306176 certs.go:257] generating profile certs ...
	I1121 14:32:35.224427  306176 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/client.key
	I1121 14:32:35.224440  306176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/client.crt with IP's: []
	I1121 14:32:35.568044  306176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/client.crt ...
	I1121 14:32:35.568073  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/client.crt: {Name:mk450d484243924f14a08823e015fad0352b4312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.568260  306176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/client.key ...
	I1121 14:32:35.568272  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/client.key: {Name:mk060861d7049505ad8aec0ac68a7c5386c7739f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.568350  306176 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.key.d2e425c6
	I1121 14:32:35.568365  306176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.crt.d2e425c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1121 14:32:35.695298  306176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.crt.d2e425c6 ...
	I1121 14:32:35.695327  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.crt.d2e425c6: {Name:mke66ab9acdc743d0c116c2b3d4cb6372025668b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.695497  306176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.key.d2e425c6 ...
	I1121 14:32:35.695510  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.key.d2e425c6: {Name:mk091aa1ce01c8419773da7bc1cd95a9840ca4c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.695612  306176 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.crt.d2e425c6 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.crt
	I1121 14:32:35.695713  306176 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.key.d2e425c6 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.key
	I1121 14:32:35.695798  306176 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.key
	I1121 14:32:35.695813  306176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.crt with IP's: []
	I1121 14:32:35.914666  306176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.crt ...
	I1121 14:32:35.914692  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.crt: {Name:mk1ece06d080c593025e2487478e531af9572914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.914878  306176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.key ...
	I1121 14:32:35.914901  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.key: {Name:mk6dc3ecb5ea0fcd797e8da375dce43678c5a603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.915115  306176 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:32:35.915163  306176 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:32:35.915178  306176 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:32:35.915209  306176 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:32:35.915267  306176 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:32:35.915310  306176 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:32:35.915368  306176 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:32:35.915994  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:32:35.935759  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:32:35.956630  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:32:35.975642  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:32:35.996827  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 14:32:36.018106  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:32:36.037653  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:32:36.056435  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:32:36.074680  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:32:36.097219  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:32:36.115873  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:32:36.134651  306176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:32:36.148459  306176 ssh_runner.go:195] Run: openssl version
	I1121 14:32:36.154894  306176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:32:36.165274  306176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:32:36.169384  306176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:32:36.169440  306176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:32:36.206052  306176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:32:36.215812  306176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:32:36.226243  306176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:32:36.230447  306176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:32:36.230534  306176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:32:36.268383  306176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:32:36.278059  306176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:32:36.287506  306176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:32:36.291729  306176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:32:36.291794  306176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:32:36.329898  306176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:32:36.339754  306176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:32:36.343759  306176 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:32:36.343816  306176 kubeadm.go:401] StartCluster: {Name:calico-459127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-459127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:32:36.343873  306176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:32:36.343932  306176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:32:36.373595  306176 cri.go:89] found id: ""
	I1121 14:32:36.373667  306176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:32:36.382781  306176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:32:36.391811  306176 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:32:36.391866  306176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:32:36.401122  306176 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:32:36.401181  306176 kubeadm.go:158] found existing configuration files:
	
	I1121 14:32:36.401255  306176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:32:36.410119  306176 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:32:36.410182  306176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:32:36.418399  306176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:32:36.426323  306176 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:32:36.426373  306176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:32:36.434102  306176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:32:36.442326  306176 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:32:36.442395  306176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:32:36.450273  306176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:32:36.459365  306176 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:32:36.459473  306176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:32:36.468397  306176 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:32:36.509590  306176 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:32:36.509666  306176 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:32:36.532047  306176 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:32:36.532134  306176 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:32:36.532196  306176 kubeadm.go:319] OS: Linux
	I1121 14:32:36.532271  306176 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:32:36.532354  306176 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:32:36.532423  306176 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:32:36.532512  306176 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:32:36.532609  306176 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:32:36.532688  306176 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:32:36.532757  306176 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:32:36.532832  306176 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:32:36.597067  306176 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:32:36.597182  306176 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:32:36.597351  306176 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:32:36.603106  306176 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	d591e340d2f0e       56cc512116c8f       7 seconds ago       Running             busybox                   0                   d10dadceab076       busybox                                      default
	9d688faa4a188       52546a367cc9e       14 seconds ago      Running             coredns                   0                   5c225eaea852f       coredns-66bc5c9577-r95cs                     kube-system
	b268167d64766       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   aea03caae917a       storage-provisioner                          kube-system
	57528421409cb       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   e5e5a727a1d24       kindnet-2dvsb                                kube-system
	cc3fa030dc8be       fc25172553d79       25 seconds ago      Running             kube-proxy                0                   1955f297de7b9       kube-proxy-klwwh                             kube-system
	c28cd1c81ac68       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   0d62e1f70e929       kube-apiserver-embed-certs-013140            kube-system
	54e28dda6c675       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   40473058033b7       kube-scheduler-embed-certs-013140            kube-system
	39548d39886f2       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   e1d73120044bd       etcd-embed-certs-013140                      kube-system
	9a9eb51d990bc       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   1c4e9d3d48ab8       kube-controller-manager-embed-certs-013140   kube-system
	
	
	==> containerd <==
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.451399101Z" level=info msg="Container 9d688faa4a188abd2938dd2bcf1a9be1db6e99aa2c5a1fb8e1e458294a0cf60c: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.455635025Z" level=info msg="CreateContainer within sandbox \"aea03caae917afbb82795884d9216af32e8ad7de44695d9a4d107f60a478850b\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"b268167d647665e45fecf0cb0cf73ef2acbadd9ac67be3f37d3acd56aa119f63\""
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.456275761Z" level=info msg="StartContainer for \"b268167d647665e45fecf0cb0cf73ef2acbadd9ac67be3f37d3acd56aa119f63\""
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.457145999Z" level=info msg="connecting to shim b268167d647665e45fecf0cb0cf73ef2acbadd9ac67be3f37d3acd56aa119f63" address="unix:///run/containerd/s/d372fb2bd22e97deb83f175bef45597e72b15123e7ac7e32e450b069e72f695d" protocol=ttrpc version=3
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.461046130Z" level=info msg="CreateContainer within sandbox \"5c225eaea852fa20c561288b44ce385c61d6ebf4a727c575091efca1a9519abb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d688faa4a188abd2938dd2bcf1a9be1db6e99aa2c5a1fb8e1e458294a0cf60c\""
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.461755938Z" level=info msg="StartContainer for \"9d688faa4a188abd2938dd2bcf1a9be1db6e99aa2c5a1fb8e1e458294a0cf60c\""
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.462825411Z" level=info msg="connecting to shim 9d688faa4a188abd2938dd2bcf1a9be1db6e99aa2c5a1fb8e1e458294a0cf60c" address="unix:///run/containerd/s/69dfb2196f79cd589449f2d5acdec8f7e0ed51201aebbfe6c8e86b40fa6ef1a0" protocol=ttrpc version=3
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.518215943Z" level=info msg="StartContainer for \"b268167d647665e45fecf0cb0cf73ef2acbadd9ac67be3f37d3acd56aa119f63\" returns successfully"
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.525036985Z" level=info msg="StartContainer for \"9d688faa4a188abd2938dd2bcf1a9be1db6e99aa2c5a1fb8e1e458294a0cf60c\" returns successfully"
	Nov 21 14:32:27 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:27.678623846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:9ddebfb3-80d6-4623-aa37-0e3ce0fef04f,Namespace:default,Attempt:0,}"
	Nov 21 14:32:27 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:27.732313516Z" level=info msg="connecting to shim d10dadceab076784fbbf1d28eebe46e3b6ea7c6c5838d5380ccb30b746fa4e23" address="unix:///run/containerd/s/246f7f99563f34899197939e2c7996653dd5fc957b9f545ff16ca4b9a5c44f3f" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:32:27 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:27.817652828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:9ddebfb3-80d6-4623-aa37-0e3ce0fef04f,Namespace:default,Attempt:0,} returns sandbox id \"d10dadceab076784fbbf1d28eebe46e3b6ea7c6c5838d5380ccb30b746fa4e23\""
	Nov 21 14:32:27 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:27.820444074Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:32:30 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:30.948955599Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.078920473Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396641"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.086700493Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.099055018Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.099761890Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 3.279261805s"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.099806084Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.310194796Z" level=info msg="CreateContainer within sandbox \"d10dadceab076784fbbf1d28eebe46e3b6ea7c6c5838d5380ccb30b746fa4e23\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.324627782Z" level=info msg="Container d591e340d2f0eb49252281a085d5317c66c55c5ee7c56b7622881e226068be96: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.332198529Z" level=info msg="CreateContainer within sandbox \"d10dadceab076784fbbf1d28eebe46e3b6ea7c6c5838d5380ccb30b746fa4e23\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"d591e340d2f0eb49252281a085d5317c66c55c5ee7c56b7622881e226068be96\""
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.333018874Z" level=info msg="StartContainer for \"d591e340d2f0eb49252281a085d5317c66c55c5ee7c56b7622881e226068be96\""
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.333996419Z" level=info msg="connecting to shim d591e340d2f0eb49252281a085d5317c66c55c5ee7c56b7622881e226068be96" address="unix:///run/containerd/s/246f7f99563f34899197939e2c7996653dd5fc957b9f545ff16ca4b9a5c44f3f" protocol=ttrpc version=3
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.398322649Z" level=info msg="StartContainer for \"d591e340d2f0eb49252281a085d5317c66c55c5ee7c56b7622881e226068be96\" returns successfully"
	
	
	==> coredns [9d688faa4a188abd2938dd2bcf1a9be1db6e99aa2c5a1fb8e1e458294a0cf60c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38664 - 27823 "HINFO IN 1053236482022564747.6316176946796434392. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023106102s
	
	
	==> describe nodes <==
	Name:               embed-certs-013140
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-013140
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=embed-certs-013140
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_32_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:32:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-013140
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:32:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:32:38 +0000   Fri, 21 Nov 2025 14:32:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:32:38 +0000   Fri, 21 Nov 2025 14:32:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:32:38 +0000   Fri, 21 Nov 2025 14:32:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:32:38 +0000   Fri, 21 Nov 2025 14:32:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-013140
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                3e3eb59d-aa90-4836-9a30-3112c0cfe78d
	  Boot ID:                    f900700b-0668-4d24-87ff-85e15fbda365
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-r95cs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-013140                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-2dvsb                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-013140             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-013140    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-klwwh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-013140             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node embed-certs-013140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node embed-certs-013140 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node embed-certs-013140 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node embed-certs-013140 event: Registered Node embed-certs-013140 in Controller
	  Normal  NodeReady                15s   kubelet          Node embed-certs-013140 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001887] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086016] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.440508] i8042: Warning: Keylock active
	[  +0.011202] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.526419] block sda: the capability attribute has been deprecated.
	[  +0.095215] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027093] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.485024] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [39548d39886f2abc13857ea6d7e4107c5a04f203dfd462aaa6a28aaeafe921d8] <==
	{"level":"warn","ts":"2025-11-21T14:32:03.531488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.545195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.556379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.570037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.583259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.597073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.612311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.620636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.629035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.640249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.652416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.662706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.674097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.682637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.693485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.700167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.716996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.734380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.744205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.766928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.774944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.784893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.879993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:32:30.696496Z","caller":"traceutil/trace.go:172","msg":"trace[296025383] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"112.432783ms","start":"2025-11-21T14:32:30.584035Z","end":"2025-11-21T14:32:30.696468Z","steps":["trace[296025383] 'process raft request'  (duration: 112.293378ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:32:31.309673Z","caller":"traceutil/trace.go:172","msg":"trace[2002741528] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"206.67551ms","start":"2025-11-21T14:32:31.102977Z","end":"2025-11-21T14:32:31.309653Z","steps":["trace[2002741528] 'process raft request'  (duration: 206.49746ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:32:38 up  1:15,  0 user,  load average: 7.59, 4.51, 2.60
	Linux embed-certs-013140 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [57528421409cbadeb4ad18e0303c003d7c895e53c564ffdfd2782a8ab1d94fcb] <==
	I1121 14:32:13.724409       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:32:13.724768       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:32:13.724913       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:32:13.724932       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:32:13.724960       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:32:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:32:13.929945       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:32:13.930001       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:32:13.930014       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:32:13.930424       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:32:14.330329       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:32:14.330373       1 metrics.go:72] Registering metrics
	I1121 14:32:14.330439       1 controller.go:711] "Syncing nftables rules"
	I1121 14:32:23.935644       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:32:23.935726       1 main.go:301] handling current node
	I1121 14:32:33.930162       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:32:33.930231       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c28cd1c81ac68983e505ac429c0cca2766edfad182ab4d03de412efd4de8c0dc] <==
	I1121 14:32:04.712101       1 policy_source.go:240] refreshing policies
	I1121 14:32:04.754340       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:32:04.766374       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:32:04.766435       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:32:04.783248       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:32:04.795048       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:32:04.893274       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:32:05.556567       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:32:05.562137       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:32:05.562163       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:32:06.353138       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:32:06.402664       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:32:06.461828       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:32:06.469117       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:32:06.470535       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:32:06.475423       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:32:06.592087       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:32:07.542169       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:32:07.556112       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:32:07.566259       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:32:12.294249       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:32:12.493882       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:32:12.596820       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:32:12.602122       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1121 14:32:37.477688       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:57386: use of closed network connection
	
	
	==> kube-controller-manager [9a9eb51d990bc2e4a764df8db8231e9787d888f74bc19b7b106cfd760e0c6af8] <==
	I1121 14:32:11.589797       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:32:11.589812       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:32:11.589855       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:32:11.589984       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:32:11.590039       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:32:11.590061       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:32:11.590073       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:32:11.590325       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:32:11.590672       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 14:32:11.590702       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:32:11.590704       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:32:11.591376       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:32:11.591479       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:32:11.592710       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:32:11.592730       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 14:32:11.592776       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:32:11.594922       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:32:11.594949       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:32:11.595001       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:32:11.600213       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 14:32:11.600327       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 14:32:11.600462       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-013140"
	I1121 14:32:11.600522       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 14:32:11.619243       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:32:26.603399       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cc3fa030dc8becef2dcdc972ffd0ba9cb33d830c81de3f653c5f4ebd31c86d22] <==
	I1121 14:32:13.141810       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:32:13.211553       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:32:13.313201       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:32:13.313249       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:32:13.313372       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:32:13.341921       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:32:13.341989       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:32:13.347861       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:32:13.348290       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:32:13.348330       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:32:13.352176       1 config.go:200] "Starting service config controller"
	I1121 14:32:13.352253       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:32:13.352322       1 config.go:309] "Starting node config controller"
	I1121 14:32:13.352404       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:32:13.352406       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:32:13.352411       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:32:13.352417       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:32:13.352427       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:32:13.352433       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:32:13.452526       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:32:13.452578       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:32:13.452607       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [54e28dda6c6755422e5eedf01330c92fd943dbaf8692fc68be473166adf0d43c] <==
	E1121 14:32:04.681593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:32:04.685894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:32:04.686015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:32:04.686080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:32:04.686139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:32:04.687128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:32:04.688318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:32:04.690866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:32:05.492583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:32:05.492961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:32:05.621886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:32:05.642133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:32:05.677709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:32:05.713028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:32:05.748368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:32:05.761303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:32:05.774744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:32:05.792197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:32:05.794825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:32:05.833145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:32:05.911750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:32:05.973812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:32:06.089054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:32:06.227361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1121 14:32:09.051730       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:32:08 embed-certs-013140 kubelet[1495]: I1121 14:32:08.539052    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-013140" podStartSLOduration=1.539029025 podStartE2EDuration="1.539029025s" podCreationTimestamp="2025-11-21 14:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:08.53414193 +0000 UTC m=+1.187038139" watchObservedRunningTime="2025-11-21 14:32:08.539029025 +0000 UTC m=+1.191925229"
	Nov 21 14:32:08 embed-certs-013140 kubelet[1495]: I1121 14:32:08.573151    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-013140" podStartSLOduration=1.573123026 podStartE2EDuration="1.573123026s" podCreationTimestamp="2025-11-21 14:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:08.571715125 +0000 UTC m=+1.224611332" watchObservedRunningTime="2025-11-21 14:32:08.573123026 +0000 UTC m=+1.226019230"
	Nov 21 14:32:08 embed-certs-013140 kubelet[1495]: I1121 14:32:08.573376    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-013140" podStartSLOduration=1.573367548 podStartE2EDuration="1.573367548s" podCreationTimestamp="2025-11-21 14:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:08.554406099 +0000 UTC m=+1.207302309" watchObservedRunningTime="2025-11-21 14:32:08.573367548 +0000 UTC m=+1.226263759"
	Nov 21 14:32:11 embed-certs-013140 kubelet[1495]: I1121 14:32:11.634929    1495 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:32:11 embed-certs-013140 kubelet[1495]: I1121 14:32:11.635618    1495 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:32:11 embed-certs-013140 kubelet[1495]: I1121 14:32:11.704389    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-013140" podStartSLOduration=4.70437242 podStartE2EDuration="4.70437242s" podCreationTimestamp="2025-11-21 14:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:08.586823066 +0000 UTC m=+1.239719292" watchObservedRunningTime="2025-11-21 14:32:11.70437242 +0000 UTC m=+4.357268629"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574316    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5a583a7c-33a2-41cf-a1f9-cf86db9bd461-kube-proxy\") pod \"kube-proxy-klwwh\" (UID: \"5a583a7c-33a2-41cf-a1f9-cf86db9bd461\") " pod="kube-system/kube-proxy-klwwh"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574363    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3a733ace-4ace-47c9-b6b9-8e5f65933c49-cni-cfg\") pod \"kindnet-2dvsb\" (UID: \"3a733ace-4ace-47c9-b6b9-8e5f65933c49\") " pod="kube-system/kindnet-2dvsb"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574380    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a583a7c-33a2-41cf-a1f9-cf86db9bd461-xtables-lock\") pod \"kube-proxy-klwwh\" (UID: \"5a583a7c-33a2-41cf-a1f9-cf86db9bd461\") " pod="kube-system/kube-proxy-klwwh"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574395    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6kj\" (UniqueName: \"kubernetes.io/projected/5a583a7c-33a2-41cf-a1f9-cf86db9bd461-kube-api-access-7c6kj\") pod \"kube-proxy-klwwh\" (UID: \"5a583a7c-33a2-41cf-a1f9-cf86db9bd461\") " pod="kube-system/kube-proxy-klwwh"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574484    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a733ace-4ace-47c9-b6b9-8e5f65933c49-lib-modules\") pod \"kindnet-2dvsb\" (UID: \"3a733ace-4ace-47c9-b6b9-8e5f65933c49\") " pod="kube-system/kindnet-2dvsb"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574553    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k75hv\" (UniqueName: \"kubernetes.io/projected/3a733ace-4ace-47c9-b6b9-8e5f65933c49-kube-api-access-k75hv\") pod \"kindnet-2dvsb\" (UID: \"3a733ace-4ace-47c9-b6b9-8e5f65933c49\") " pod="kube-system/kindnet-2dvsb"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574593    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a583a7c-33a2-41cf-a1f9-cf86db9bd461-lib-modules\") pod \"kube-proxy-klwwh\" (UID: \"5a583a7c-33a2-41cf-a1f9-cf86db9bd461\") " pod="kube-system/kube-proxy-klwwh"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574615    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a733ace-4ace-47c9-b6b9-8e5f65933c49-xtables-lock\") pod \"kindnet-2dvsb\" (UID: \"3a733ace-4ace-47c9-b6b9-8e5f65933c49\") " pod="kube-system/kindnet-2dvsb"
	Nov 21 14:32:13 embed-certs-013140 kubelet[1495]: I1121 14:32:13.511376    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-klwwh" podStartSLOduration=1.511351405 podStartE2EDuration="1.511351405s" podCreationTimestamp="2025-11-21 14:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:13.511215762 +0000 UTC m=+6.164111971" watchObservedRunningTime="2025-11-21 14:32:13.511351405 +0000 UTC m=+6.164247618"
	Nov 21 14:32:13 embed-certs-013140 kubelet[1495]: I1121 14:32:13.548498    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2dvsb" podStartSLOduration=1.548470568 podStartE2EDuration="1.548470568s" podCreationTimestamp="2025-11-21 14:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:13.526393617 +0000 UTC m=+6.179289809" watchObservedRunningTime="2025-11-21 14:32:13.548470568 +0000 UTC m=+6.201366776"
	Nov 21 14:32:23 embed-certs-013140 kubelet[1495]: I1121 14:32:23.974827    1495 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:32:24 embed-certs-013140 kubelet[1495]: I1121 14:32:24.051600    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcf2q\" (UniqueName: \"kubernetes.io/projected/9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec-kube-api-access-kcf2q\") pod \"storage-provisioner\" (UID: \"9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec\") " pod="kube-system/storage-provisioner"
	Nov 21 14:32:24 embed-certs-013140 kubelet[1495]: I1121 14:32:24.051662    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vcm\" (UniqueName: \"kubernetes.io/projected/f98cd5f5-83b2-4a40-b75d-868145de6f36-kube-api-access-b7vcm\") pod \"coredns-66bc5c9577-r95cs\" (UID: \"f98cd5f5-83b2-4a40-b75d-868145de6f36\") " pod="kube-system/coredns-66bc5c9577-r95cs"
	Nov 21 14:32:24 embed-certs-013140 kubelet[1495]: I1121 14:32:24.051680    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec-tmp\") pod \"storage-provisioner\" (UID: \"9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec\") " pod="kube-system/storage-provisioner"
	Nov 21 14:32:24 embed-certs-013140 kubelet[1495]: I1121 14:32:24.051699    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f98cd5f5-83b2-4a40-b75d-868145de6f36-config-volume\") pod \"coredns-66bc5c9577-r95cs\" (UID: \"f98cd5f5-83b2-4a40-b75d-868145de6f36\") " pod="kube-system/coredns-66bc5c9577-r95cs"
	Nov 21 14:32:24 embed-certs-013140 kubelet[1495]: I1121 14:32:24.543186    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r95cs" podStartSLOduration=12.543164953 podStartE2EDuration="12.543164953s" podCreationTimestamp="2025-11-21 14:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:24.542742189 +0000 UTC m=+17.195638398" watchObservedRunningTime="2025-11-21 14:32:24.543164953 +0000 UTC m=+17.196061162"
	Nov 21 14:32:24 embed-certs-013140 kubelet[1495]: I1121 14:32:24.557880    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.55785693 podStartE2EDuration="12.55785693s" podCreationTimestamp="2025-11-21 14:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:24.557841468 +0000 UTC m=+17.210737677" watchObservedRunningTime="2025-11-21 14:32:24.55785693 +0000 UTC m=+17.210753137"
	Nov 21 14:32:27 embed-certs-013140 kubelet[1495]: I1121 14:32:27.475809    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s4x8\" (UniqueName: \"kubernetes.io/projected/9ddebfb3-80d6-4623-aa37-0e3ce0fef04f-kube-api-access-4s4x8\") pod \"busybox\" (UID: \"9ddebfb3-80d6-4623-aa37-0e3ce0fef04f\") " pod="default/busybox"
	Nov 21 14:32:31 embed-certs-013140 kubelet[1495]: I1121 14:32:31.566619    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.2856899849999999 podStartE2EDuration="4.566594953s" podCreationTimestamp="2025-11-21 14:32:27 +0000 UTC" firstStartedPulling="2025-11-21 14:32:27.819866709 +0000 UTC m=+20.472762975" lastFinishedPulling="2025-11-21 14:32:31.10077175 +0000 UTC m=+23.753667943" observedRunningTime="2025-11-21 14:32:31.566109697 +0000 UTC m=+24.219005905" watchObservedRunningTime="2025-11-21 14:32:31.566594953 +0000 UTC m=+24.219491161"
	
	
	==> storage-provisioner [b268167d647665e45fecf0cb0cf73ef2acbadd9ac67be3f37d3acd56aa119f63] <==
	I1121 14:32:24.527939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:32:24.537648       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:32:24.537721       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:32:24.542268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:24.549712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:32:24.550229       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:32:24.550603       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-013140_b205943d-9003-4c7a-8a73-53a83151b14f!
	I1121 14:32:24.551701       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ae412e9-189e-4d61-b533-a5ccb87a6e9d", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-013140_b205943d-9003-4c7a-8a73-53a83151b14f became leader
	W1121 14:32:24.553713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:24.558713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:32:24.651314       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-013140_b205943d-9003-4c7a-8a73-53a83151b14f!
	W1121 14:32:26.562195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:26.569153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:28.572609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:28.577680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:30.581421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:30.697747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:32.702001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:32.707365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:34.710627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:34.716319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:36.720402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:36.725468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:38.729827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:38.735318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-013140 -n embed-certs-013140
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-013140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-013140
helpers_test.go:243: (dbg) docker inspect embed-certs-013140:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3",
	        "Created": "2025-11-21T14:31:46.141263741Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291736,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:31:46.187510679Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3/hostname",
	        "HostsPath": "/var/lib/docker/containers/cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3/hosts",
	        "LogPath": "/var/lib/docker/containers/cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3/cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3-json.log",
	        "Name": "/embed-certs-013140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-013140:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-013140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cd6ba875b6afc2ca3d509064c39d01a0b98424ae6a9248a8d28e21b3a6b37ba3",
	                "LowerDir": "/var/lib/docker/overlay2/fd941f498a6cf8c5ff5a99fd2dd988f0cc4bc487fd6f0021c002431f13528818-init/diff:/var/lib/docker/overlay2/a649757dd9587fa5a20ca8a56ec1923099f2a5e912dc7e8e1dfa08e79248b59f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fd941f498a6cf8c5ff5a99fd2dd988f0cc4bc487fd6f0021c002431f13528818/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fd941f498a6cf8c5ff5a99fd2dd988f0cc4bc487fd6f0021c002431f13528818/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fd941f498a6cf8c5ff5a99fd2dd988f0cc4bc487fd6f0021c002431f13528818/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-013140",
	                "Source": "/var/lib/docker/volumes/embed-certs-013140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-013140",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-013140",
	                "name.minikube.sigs.k8s.io": "embed-certs-013140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "083559fae12cc0f6d45a8bada64938617602030398964146e6f624238aa2f06d",
	            "SandboxKey": "/var/run/docker/netns/083559fae12c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-013140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a37427e1a237ca51fcb15868ce9620ac7feb39d1a540dac313b302224f6cad5",
	                    "EndpointID": "af86ca8cc0e0b41aaeef87ac35ac8723f583be1717424c8f353f22ae7431544a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "3e:0e:43:b4:97:8b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-013140",
	                        "cd6ba875b6af"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-013140 -n embed-certs-013140
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-013140 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-013140 logs -n 25: (1.138613662s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ delete  │ -p default-k8s-diff-port-376255                                                                                                                                                                                                                     │ default-k8s-diff-port-376255 │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ unpause │ -p old-k8s-version-012258 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ delete  │ -p old-k8s-version-012258                                                                                                                                                                                                                           │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ delete  │ -p default-k8s-diff-port-376255                                                                                                                                                                                                                     │ default-k8s-diff-port-376255 │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ delete  │ -p disable-driver-mounts-088626                                                                                                                                                                                                                     │ disable-driver-mounts-088626 │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ start   │ -p embed-certs-013140 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-013140           │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:32 UTC │
	│ delete  │ -p old-k8s-version-012258                                                                                                                                                                                                                           │ old-k8s-version-012258       │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ start   │ -p auto-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-459127                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:32 UTC │
	│ image   │ no-preload-921956 image list --format=json                                                                                                                                                                                                          │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ pause   │ -p no-preload-921956 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ unpause │ -p no-preload-921956 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ delete  │ -p no-preload-921956                                                                                                                                                                                                                                │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ delete  │ -p no-preload-921956                                                                                                                                                                                                                                │ no-preload-921956            │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │ 21 Nov 25 14:31 UTC │
	│ start   │ -p kindnet-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd                                                                                                      │ kindnet-459127               │ jenkins │ v1.37.0 │ 21 Nov 25 14:31 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-163061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ stop    │ -p newest-cni-163061 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ addons  │ enable dashboard -p newest-cni-163061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ start   │ -p newest-cni-163061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ image   │ newest-cni-163061 image list --format=json                                                                                                                                                                                                          │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ pause   │ -p newest-cni-163061 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ unpause │ -p newest-cni-163061 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ delete  │ -p newest-cni-163061                                                                                                                                                                                                                                │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ delete  │ -p newest-cni-163061                                                                                                                                                                                                                                │ newest-cni-163061            │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	│ start   │ -p calico-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd                                                                                                        │ calico-459127                │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │                     │
	│ ssh     │ -p auto-459127 pgrep -a kubelet                                                                                                                                                                                                                     │ auto-459127                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:32 UTC │ 21 Nov 25 14:32 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:32:25
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:32:25.574614  306176 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:32:25.574915  306176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:32:25.574927  306176 out.go:374] Setting ErrFile to fd 2...
	I1121 14:32:25.574938  306176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:32:25.575167  306176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:32:25.575670  306176 out.go:368] Setting JSON to false
	I1121 14:32:25.576858  306176 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4488,"bootTime":1763731058,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:32:25.576950  306176 start.go:143] virtualization: kvm guest
	I1121 14:32:25.579187  306176 out.go:179] * [calico-459127] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:32:25.581042  306176 notify.go:221] Checking for updates...
	I1121 14:32:25.581082  306176 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:32:25.582720  306176 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:32:25.584235  306176 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:32:25.585621  306176 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 14:32:25.587153  306176 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:32:25.588709  306176 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:32:25.591028  306176 config.go:182] Loaded profile config "auto-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:32:25.591193  306176 config.go:182] Loaded profile config "embed-certs-013140": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:32:25.591289  306176 config.go:182] Loaded profile config "kindnet-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:32:25.591418  306176 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:32:25.624674  306176 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:32:25.624844  306176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:32:25.704912  306176 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:32:25.695049107 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:32:25.705073  306176 docker.go:319] overlay module found
	I1121 14:32:25.707070  306176 out.go:179] * Using the docker driver based on user configuration
	I1121 14:32:25.708291  306176 start.go:309] selected driver: docker
	I1121 14:32:25.708308  306176 start.go:930] validating driver "docker" against <nil>
	I1121 14:32:25.708318  306176 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:32:25.708840  306176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:32:25.784750  306176 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:32:25.772341931 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:32:25.784982  306176 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:32:25.785211  306176 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:32:25.787649  306176 out.go:179] * Using Docker driver with root privileges
	I1121 14:32:25.789192  306176 cni.go:84] Creating CNI manager for "calico"
	I1121 14:32:25.789218  306176 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1121 14:32:25.789342  306176 start.go:353] cluster config:
	{Name:calico-459127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-459127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:32:25.791158  306176 out.go:179] * Starting "calico-459127" primary control-plane node in "calico-459127" cluster
	I1121 14:32:25.792499  306176 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:32:25.794109  306176 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:32:25.795413  306176 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:32:25.795461  306176 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1121 14:32:25.795475  306176 cache.go:65] Caching tarball of preloaded images
	I1121 14:32:25.795505  306176 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:32:25.795607  306176 preload.go:238] Found /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1121 14:32:25.795627  306176 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 14:32:25.795796  306176 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/config.json ...
	I1121 14:32:25.795830  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/config.json: {Name:mkcba83e453a390792167ca348e7a7efc2dd1ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:25.821977  306176 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:32:25.822007  306176 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:32:25.822028  306176 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:32:25.822068  306176 start.go:360] acquireMachinesLock for calico-459127: {Name:mk4243ffc2c1ce567ec0215b16e180977fa504cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:32:25.822215  306176 start.go:364] duration metric: took 123.047µs to acquireMachinesLock for "calico-459127"
	I1121 14:32:25.822252  306176 start.go:93] Provisioning new machine with config: &{Name:calico-459127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-459127 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:32:25.822349  306176 start.go:125] createHost starting for "" (driver="docker")
	W1121 14:32:23.543989  290092 node_ready.go:57] node "embed-certs-013140" has "Ready":"False" status (will retry)
	I1121 14:32:24.043525  290092 node_ready.go:49] node "embed-certs-013140" is "Ready"
	I1121 14:32:24.043572  290092 node_ready.go:38] duration metric: took 12.003797013s for node "embed-certs-013140" to be "Ready" ...
	I1121 14:32:24.043589  290092 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:32:24.043635  290092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:32:24.057319  290092 api_server.go:72] duration metric: took 12.340324426s to wait for apiserver process to appear ...
	I1121 14:32:24.057348  290092 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:32:24.057371  290092 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:32:24.062890  290092 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:32:24.063908  290092 api_server.go:141] control plane version: v1.34.1
	I1121 14:32:24.063963  290092 api_server.go:131] duration metric: took 6.582168ms to wait for apiserver health ...
	I1121 14:32:24.063980  290092 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:32:24.067640  290092 system_pods.go:59] 8 kube-system pods found
	I1121 14:32:24.067680  290092 system_pods.go:61] "coredns-66bc5c9577-r95cs" [f98cd5f5-83b2-4a40-b75d-868145de6f36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:24.067705  290092 system_pods.go:61] "etcd-embed-certs-013140" [48adac7e-ce34-4f76-9d7e-f066a08d5674] Running
	I1121 14:32:24.067714  290092 system_pods.go:61] "kindnet-2dvsb" [3a733ace-4ace-47c9-b6b9-8e5f65933c49] Running
	I1121 14:32:24.067718  290092 system_pods.go:61] "kube-apiserver-embed-certs-013140" [61277438-5d91-49ac-bc7b-e6fdd718b06d] Running
	I1121 14:32:24.067722  290092 system_pods.go:61] "kube-controller-manager-embed-certs-013140" [25f8e054-6147-477c-b4c0-6d919ef9154e] Running
	I1121 14:32:24.067728  290092 system_pods.go:61] "kube-proxy-klwwh" [5a583a7c-33a2-41cf-a1f9-cf86db9bd461] Running
	I1121 14:32:24.067732  290092 system_pods.go:61] "kube-scheduler-embed-certs-013140" [dbd79435-c118-436d-9baa-dcab2d85b718] Running
	I1121 14:32:24.067739  290092 system_pods.go:61] "storage-provisioner" [9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:24.067755  290092 system_pods.go:74] duration metric: took 3.765355ms to wait for pod list to return data ...
	I1121 14:32:24.067767  290092 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:32:24.070586  290092 default_sa.go:45] found service account: "default"
	I1121 14:32:24.070610  290092 default_sa.go:55] duration metric: took 2.831061ms for default service account to be created ...
	I1121 14:32:24.070628  290092 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:32:24.073446  290092 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:24.073474  290092 system_pods.go:89] "coredns-66bc5c9577-r95cs" [f98cd5f5-83b2-4a40-b75d-868145de6f36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:24.073479  290092 system_pods.go:89] "etcd-embed-certs-013140" [48adac7e-ce34-4f76-9d7e-f066a08d5674] Running
	I1121 14:32:24.073489  290092 system_pods.go:89] "kindnet-2dvsb" [3a733ace-4ace-47c9-b6b9-8e5f65933c49] Running
	I1121 14:32:24.073493  290092 system_pods.go:89] "kube-apiserver-embed-certs-013140" [61277438-5d91-49ac-bc7b-e6fdd718b06d] Running
	I1121 14:32:24.073497  290092 system_pods.go:89] "kube-controller-manager-embed-certs-013140" [25f8e054-6147-477c-b4c0-6d919ef9154e] Running
	I1121 14:32:24.073500  290092 system_pods.go:89] "kube-proxy-klwwh" [5a583a7c-33a2-41cf-a1f9-cf86db9bd461] Running
	I1121 14:32:24.073503  290092 system_pods.go:89] "kube-scheduler-embed-certs-013140" [dbd79435-c118-436d-9baa-dcab2d85b718] Running
	I1121 14:32:24.073507  290092 system_pods.go:89] "storage-provisioner" [9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:24.073524  290092 retry.go:31] will retry after 302.452336ms: missing components: kube-dns
	I1121 14:32:24.382061  290092 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:24.382105  290092 system_pods.go:89] "coredns-66bc5c9577-r95cs" [f98cd5f5-83b2-4a40-b75d-868145de6f36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:24.382114  290092 system_pods.go:89] "etcd-embed-certs-013140" [48adac7e-ce34-4f76-9d7e-f066a08d5674] Running
	I1121 14:32:24.382122  290092 system_pods.go:89] "kindnet-2dvsb" [3a733ace-4ace-47c9-b6b9-8e5f65933c49] Running
	I1121 14:32:24.382127  290092 system_pods.go:89] "kube-apiserver-embed-certs-013140" [61277438-5d91-49ac-bc7b-e6fdd718b06d] Running
	I1121 14:32:24.382132  290092 system_pods.go:89] "kube-controller-manager-embed-certs-013140" [25f8e054-6147-477c-b4c0-6d919ef9154e] Running
	I1121 14:32:24.382142  290092 system_pods.go:89] "kube-proxy-klwwh" [5a583a7c-33a2-41cf-a1f9-cf86db9bd461] Running
	I1121 14:32:24.382148  290092 system_pods.go:89] "kube-scheduler-embed-certs-013140" [dbd79435-c118-436d-9baa-dcab2d85b718] Running
	I1121 14:32:24.382155  290092 system_pods.go:89] "storage-provisioner" [9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:24.382176  290092 retry.go:31] will retry after 240.891701ms: missing components: kube-dns
	I1121 14:32:24.627792  290092 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:24.627829  290092 system_pods.go:89] "coredns-66bc5c9577-r95cs" [f98cd5f5-83b2-4a40-b75d-868145de6f36] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:24.627838  290092 system_pods.go:89] "etcd-embed-certs-013140" [48adac7e-ce34-4f76-9d7e-f066a08d5674] Running
	I1121 14:32:24.627845  290092 system_pods.go:89] "kindnet-2dvsb" [3a733ace-4ace-47c9-b6b9-8e5f65933c49] Running
	I1121 14:32:24.627849  290092 system_pods.go:89] "kube-apiserver-embed-certs-013140" [61277438-5d91-49ac-bc7b-e6fdd718b06d] Running
	I1121 14:32:24.627860  290092 system_pods.go:89] "kube-controller-manager-embed-certs-013140" [25f8e054-6147-477c-b4c0-6d919ef9154e] Running
	I1121 14:32:24.627866  290092 system_pods.go:89] "kube-proxy-klwwh" [5a583a7c-33a2-41cf-a1f9-cf86db9bd461] Running
	I1121 14:32:24.627873  290092 system_pods.go:89] "kube-scheduler-embed-certs-013140" [dbd79435-c118-436d-9baa-dcab2d85b718] Running
	I1121 14:32:24.627878  290092 system_pods.go:89] "storage-provisioner" [9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec] Running
	I1121 14:32:24.627889  290092 system_pods.go:126] duration metric: took 557.254395ms to wait for k8s-apps to be running ...
	I1121 14:32:24.627903  290092 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:32:24.627954  290092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:32:24.641532  290092 system_svc.go:56] duration metric: took 13.619346ms WaitForService to wait for kubelet
	I1121 14:32:24.641578  290092 kubeadm.go:587] duration metric: took 12.924592276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:32:24.641603  290092 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:32:24.645040  290092 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:32:24.645064  290092 node_conditions.go:123] node cpu capacity is 8
	I1121 14:32:24.645087  290092 node_conditions.go:105] duration metric: took 3.476716ms to run NodePressure ...
	I1121 14:32:24.645099  290092 start.go:242] waiting for startup goroutines ...
	I1121 14:32:24.645109  290092 start.go:247] waiting for cluster config update ...
	I1121 14:32:24.645118  290092 start.go:256] writing updated cluster config ...
	I1121 14:32:24.645370  290092 ssh_runner.go:195] Run: rm -f paused
	I1121 14:32:24.649592  290092 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:32:24.653589  290092 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r95cs" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.661795  290092 pod_ready.go:94] pod "coredns-66bc5c9577-r95cs" is "Ready"
	I1121 14:32:25.661831  290092 pod_ready.go:86] duration metric: took 1.008214212s for pod "coredns-66bc5c9577-r95cs" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.665095  290092 pod_ready.go:83] waiting for pod "etcd-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.674067  290092 pod_ready.go:94] pod "etcd-embed-certs-013140" is "Ready"
	I1121 14:32:25.674102  290092 pod_ready.go:86] duration metric: took 8.959279ms for pod "etcd-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.678172  290092 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.684812  290092 pod_ready.go:94] pod "kube-apiserver-embed-certs-013140" is "Ready"
	I1121 14:32:25.684903  290092 pod_ready.go:86] duration metric: took 6.621169ms for pod "kube-apiserver-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.688844  290092 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.860621  290092 pod_ready.go:94] pod "kube-controller-manager-embed-certs-013140" is "Ready"
	I1121 14:32:25.860652  290092 pod_ready.go:86] duration metric: took 171.708723ms for pod "kube-controller-manager-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.058046  290092 pod_ready.go:83] waiting for pod "kube-proxy-klwwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.458526  290092 pod_ready.go:94] pod "kube-proxy-klwwh" is "Ready"
	I1121 14:32:26.458588  290092 pod_ready.go:86] duration metric: took 400.510052ms for pod "kube-proxy-klwwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.659912  290092 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:25.319154  296399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:32:25.818741  296399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:32:26.319328  296399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:32:26.819253  296399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:32:26.909328  296399 kubeadm.go:1114] duration metric: took 4.760483045s to wait for elevateKubeSystemPrivileges
	I1121 14:32:26.909366  296399 kubeadm.go:403] duration metric: took 18.835973773s to StartCluster
	I1121 14:32:26.909390  296399 settings.go:142] acquiring lock: {Name:mkfe3f8167491ec1abfca3e17282002404072955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:26.909471  296399 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:32:26.911434  296399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/kubeconfig: {Name:mk5d3e3ed379bd47c91313113a93ad7e3f44dbb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:26.911715  296399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:32:26.911718  296399 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:32:26.911825  296399 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:32:26.911915  296399 addons.go:70] Setting storage-provisioner=true in profile "kindnet-459127"
	I1121 14:32:26.911929  296399 config.go:182] Loaded profile config "kindnet-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:32:26.911944  296399 addons.go:239] Setting addon storage-provisioner=true in "kindnet-459127"
	I1121 14:32:26.911995  296399 host.go:66] Checking if "kindnet-459127" exists ...
	I1121 14:32:26.912068  296399 addons.go:70] Setting default-storageclass=true in profile "kindnet-459127"
	I1121 14:32:26.912097  296399 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-459127"
	I1121 14:32:26.912416  296399 cli_runner.go:164] Run: docker container inspect kindnet-459127 --format={{.State.Status}}
	I1121 14:32:26.912618  296399 cli_runner.go:164] Run: docker container inspect kindnet-459127 --format={{.State.Status}}
	I1121 14:32:26.913475  296399 out.go:179] * Verifying Kubernetes components...
	I1121 14:32:26.915214  296399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:32:26.944336  296399 addons.go:239] Setting addon default-storageclass=true in "kindnet-459127"
	I1121 14:32:26.944379  296399 host.go:66] Checking if "kindnet-459127" exists ...
	I1121 14:32:26.945021  296399 cli_runner.go:164] Run: docker container inspect kindnet-459127 --format={{.State.Status}}
	I1121 14:32:26.946999  296399 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:32:27.057868  290092 pod_ready.go:94] pod "kube-scheduler-embed-certs-013140" is "Ready"
	I1121 14:32:27.057896  290092 pod_ready.go:86] duration metric: took 397.954176ms for pod "kube-scheduler-embed-certs-013140" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:27.057912  290092 pod_ready.go:40] duration metric: took 2.408282324s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:32:27.120222  290092 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:32:27.121833  290092 out.go:179] * Done! kubectl is now configured to use "embed-certs-013140" cluster and "default" namespace by default
	I1121 14:32:26.948840  296399 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:32:26.948875  296399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:32:26.948980  296399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459127
	I1121 14:32:26.983191  296399 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:32:26.983218  296399 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:32:26.983291  296399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459127
	I1121 14:32:26.985952  296399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/kindnet-459127/id_rsa Username:docker}
	I1121 14:32:27.014487  296399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/kindnet-459127/id_rsa Username:docker}
	I1121 14:32:27.046782  296399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:32:27.102315  296399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:32:27.116897  296399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:32:27.138025  296399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:32:27.288845  296399 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1121 14:32:27.292022  296399 node_ready.go:35] waiting up to 15m0s for node "kindnet-459127" to be "Ready" ...
	I1121 14:32:27.555466  296399 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1121 14:32:23.819696  290817 node_ready.go:57] node "auto-459127" has "Ready":"False" status (will retry)
	I1121 14:32:25.822279  290817 node_ready.go:49] node "auto-459127" is "Ready"
	I1121 14:32:25.822304  290817 node_ready.go:38] duration metric: took 11.00663893s for node "auto-459127" to be "Ready" ...
	I1121 14:32:25.822321  290817 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:32:25.822385  290817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:32:25.841862  290817 api_server.go:72] duration metric: took 11.686182135s to wait for apiserver process to appear ...
	I1121 14:32:25.841894  290817 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:32:25.841917  290817 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:32:25.847430  290817 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1121 14:32:25.848681  290817 api_server.go:141] control plane version: v1.34.1
	I1121 14:32:25.848711  290817 api_server.go:131] duration metric: took 6.809354ms to wait for apiserver health ...
	I1121 14:32:25.848722  290817 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:32:25.855426  290817 system_pods.go:59] 8 kube-system pods found
	I1121 14:32:25.855482  290817 system_pods.go:61] "coredns-66bc5c9577-bqr8h" [b178736d-9f21-4662-bfb0-34d6e721e7ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:25.855494  290817 system_pods.go:61] "etcd-auto-459127" [0e3ae254-bffd-4095-81ca-0c0cda14b7a8] Running
	I1121 14:32:25.855509  290817 system_pods.go:61] "kindnet-5twqm" [8b6c17bf-5774-4417-97f9-16b78c95446f] Running
	I1121 14:32:25.855521  290817 system_pods.go:61] "kube-apiserver-auto-459127" [25b4879e-e0d4-4029-9f8a-226d76793f01] Running
	I1121 14:32:25.855532  290817 system_pods.go:61] "kube-controller-manager-auto-459127" [b0aac631-455a-411b-88f7-dcc10cc6743a] Running
	I1121 14:32:25.855614  290817 system_pods.go:61] "kube-proxy-2n8t9" [705a0196-a043-494c-b8e2-da476def44dc] Running
	I1121 14:32:25.855637  290817 system_pods.go:61] "kube-scheduler-auto-459127" [54482a2a-8f7d-41fe-ad55-4dd0d01027e6] Running
	I1121 14:32:25.856625  290817 system_pods.go:61] "storage-provisioner" [4f2ed244-b242-4e64-8531-c420d67ce642] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:25.856645  290817 system_pods.go:74] duration metric: took 7.914135ms to wait for pod list to return data ...
	I1121 14:32:25.856659  290817 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:32:25.860414  290817 default_sa.go:45] found service account: "default"
	I1121 14:32:25.860450  290817 default_sa.go:55] duration metric: took 3.776975ms for default service account to be created ...
	I1121 14:32:25.860461  290817 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:32:25.868796  290817 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:25.868837  290817 system_pods.go:89] "coredns-66bc5c9577-bqr8h" [b178736d-9f21-4662-bfb0-34d6e721e7ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:25.868845  290817 system_pods.go:89] "etcd-auto-459127" [0e3ae254-bffd-4095-81ca-0c0cda14b7a8] Running
	I1121 14:32:25.868853  290817 system_pods.go:89] "kindnet-5twqm" [8b6c17bf-5774-4417-97f9-16b78c95446f] Running
	I1121 14:32:25.868858  290817 system_pods.go:89] "kube-apiserver-auto-459127" [25b4879e-e0d4-4029-9f8a-226d76793f01] Running
	I1121 14:32:25.868863  290817 system_pods.go:89] "kube-controller-manager-auto-459127" [b0aac631-455a-411b-88f7-dcc10cc6743a] Running
	I1121 14:32:25.868868  290817 system_pods.go:89] "kube-proxy-2n8t9" [705a0196-a043-494c-b8e2-da476def44dc] Running
	I1121 14:32:25.868873  290817 system_pods.go:89] "kube-scheduler-auto-459127" [54482a2a-8f7d-41fe-ad55-4dd0d01027e6] Running
	I1121 14:32:25.868880  290817 system_pods.go:89] "storage-provisioner" [4f2ed244-b242-4e64-8531-c420d67ce642] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:25.868906  290817 retry.go:31] will retry after 209.914619ms: missing components: kube-dns
	I1121 14:32:26.083950  290817 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:26.084003  290817 system_pods.go:89] "coredns-66bc5c9577-bqr8h" [b178736d-9f21-4662-bfb0-34d6e721e7ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:26.084014  290817 system_pods.go:89] "etcd-auto-459127" [0e3ae254-bffd-4095-81ca-0c0cda14b7a8] Running
	I1121 14:32:26.084024  290817 system_pods.go:89] "kindnet-5twqm" [8b6c17bf-5774-4417-97f9-16b78c95446f] Running
	I1121 14:32:26.084032  290817 system_pods.go:89] "kube-apiserver-auto-459127" [25b4879e-e0d4-4029-9f8a-226d76793f01] Running
	I1121 14:32:26.084039  290817 system_pods.go:89] "kube-controller-manager-auto-459127" [b0aac631-455a-411b-88f7-dcc10cc6743a] Running
	I1121 14:32:26.084056  290817 system_pods.go:89] "kube-proxy-2n8t9" [705a0196-a043-494c-b8e2-da476def44dc] Running
	I1121 14:32:26.084063  290817 system_pods.go:89] "kube-scheduler-auto-459127" [54482a2a-8f7d-41fe-ad55-4dd0d01027e6] Running
	I1121 14:32:26.084076  290817 system_pods.go:89] "storage-provisioner" [4f2ed244-b242-4e64-8531-c420d67ce642] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:26.084101  290817 retry.go:31] will retry after 381.085812ms: missing components: kube-dns
	I1121 14:32:26.469503  290817 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:26.469569  290817 system_pods.go:89] "coredns-66bc5c9577-bqr8h" [b178736d-9f21-4662-bfb0-34d6e721e7ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:32:26.469581  290817 system_pods.go:89] "etcd-auto-459127" [0e3ae254-bffd-4095-81ca-0c0cda14b7a8] Running
	I1121 14:32:26.469591  290817 system_pods.go:89] "kindnet-5twqm" [8b6c17bf-5774-4417-97f9-16b78c95446f] Running
	I1121 14:32:26.469599  290817 system_pods.go:89] "kube-apiserver-auto-459127" [25b4879e-e0d4-4029-9f8a-226d76793f01] Running
	I1121 14:32:26.469607  290817 system_pods.go:89] "kube-controller-manager-auto-459127" [b0aac631-455a-411b-88f7-dcc10cc6743a] Running
	I1121 14:32:26.469614  290817 system_pods.go:89] "kube-proxy-2n8t9" [705a0196-a043-494c-b8e2-da476def44dc] Running
	I1121 14:32:26.469620  290817 system_pods.go:89] "kube-scheduler-auto-459127" [54482a2a-8f7d-41fe-ad55-4dd0d01027e6] Running
	I1121 14:32:26.469629  290817 system_pods.go:89] "storage-provisioner" [4f2ed244-b242-4e64-8531-c420d67ce642] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:32:26.469653  290817 retry.go:31] will retry after 426.864838ms: missing components: kube-dns
	I1121 14:32:26.901250  290817 system_pods.go:86] 8 kube-system pods found
	I1121 14:32:26.901284  290817 system_pods.go:89] "coredns-66bc5c9577-bqr8h" [b178736d-9f21-4662-bfb0-34d6e721e7ff] Running
	I1121 14:32:26.901295  290817 system_pods.go:89] "etcd-auto-459127" [0e3ae254-bffd-4095-81ca-0c0cda14b7a8] Running
	I1121 14:32:26.901300  290817 system_pods.go:89] "kindnet-5twqm" [8b6c17bf-5774-4417-97f9-16b78c95446f] Running
	I1121 14:32:26.901305  290817 system_pods.go:89] "kube-apiserver-auto-459127" [25b4879e-e0d4-4029-9f8a-226d76793f01] Running
	I1121 14:32:26.901311  290817 system_pods.go:89] "kube-controller-manager-auto-459127" [b0aac631-455a-411b-88f7-dcc10cc6743a] Running
	I1121 14:32:26.901317  290817 system_pods.go:89] "kube-proxy-2n8t9" [705a0196-a043-494c-b8e2-da476def44dc] Running
	I1121 14:32:26.901325  290817 system_pods.go:89] "kube-scheduler-auto-459127" [54482a2a-8f7d-41fe-ad55-4dd0d01027e6] Running
	I1121 14:32:26.901330  290817 system_pods.go:89] "storage-provisioner" [4f2ed244-b242-4e64-8531-c420d67ce642] Running
	I1121 14:32:26.901340  290817 system_pods.go:126] duration metric: took 1.040871748s to wait for k8s-apps to be running ...
	I1121 14:32:26.901354  290817 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:32:26.901401  290817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:32:26.918330  290817 system_svc.go:56] duration metric: took 16.951306ms WaitForService to wait for kubelet
	I1121 14:32:26.918370  290817 kubeadm.go:587] duration metric: took 12.762695032s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:32:26.918395  290817 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:32:26.925066  290817 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:32:26.925098  290817 node_conditions.go:123] node cpu capacity is 8
	I1121 14:32:26.925187  290817 node_conditions.go:105] duration metric: took 6.784662ms to run NodePressure ...
	I1121 14:32:26.925220  290817 start.go:242] waiting for startup goroutines ...
	I1121 14:32:26.925230  290817 start.go:247] waiting for cluster config update ...
	I1121 14:32:26.925281  290817 start.go:256] writing updated cluster config ...
	I1121 14:32:26.925733  290817 ssh_runner.go:195] Run: rm -f paused
	I1121 14:32:26.934635  290817 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:32:26.939829  290817 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bqr8h" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.947802  290817 pod_ready.go:94] pod "coredns-66bc5c9577-bqr8h" is "Ready"
	I1121 14:32:26.947835  290817 pod_ready.go:86] duration metric: took 7.969545ms for pod "coredns-66bc5c9577-bqr8h" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.951820  290817 pod_ready.go:83] waiting for pod "etcd-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.962526  290817 pod_ready.go:94] pod "etcd-auto-459127" is "Ready"
	I1121 14:32:26.962645  290817 pod_ready.go:86] duration metric: took 10.796396ms for pod "etcd-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.967673  290817 pod_ready.go:83] waiting for pod "kube-apiserver-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.980976  290817 pod_ready.go:94] pod "kube-apiserver-auto-459127" is "Ready"
	I1121 14:32:26.981007  290817 pod_ready.go:86] duration metric: took 13.150715ms for pod "kube-apiserver-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:26.986485  290817 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:27.340483  290817 pod_ready.go:94] pod "kube-controller-manager-auto-459127" is "Ready"
	I1121 14:32:27.340513  290817 pod_ready.go:86] duration metric: took 354.007416ms for pod "kube-controller-manager-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:27.541234  290817 pod_ready.go:83] waiting for pod "kube-proxy-2n8t9" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:27.940047  290817 pod_ready.go:94] pod "kube-proxy-2n8t9" is "Ready"
	I1121 14:32:27.940075  290817 pod_ready.go:86] duration metric: took 398.804636ms for pod "kube-proxy-2n8t9" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:28.140279  290817 pod_ready.go:83] waiting for pod "kube-scheduler-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:28.540502  290817 pod_ready.go:94] pod "kube-scheduler-auto-459127" is "Ready"
	I1121 14:32:28.540527  290817 pod_ready.go:86] duration metric: took 400.223189ms for pod "kube-scheduler-auto-459127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:32:28.540578  290817 pod_ready.go:40] duration metric: took 1.605845616s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:32:28.591849  290817 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:32:28.594267  290817 out.go:179] * Done! kubectl is now configured to use "auto-459127" cluster and "default" namespace by default
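
	The pod_ready lines above poll each control-plane pod until its Ready condition is True, with a 4m0s cap. A minimal client-go sketch of that style of readiness wait follows; the label selector, namespace, poll interval, and timeout are illustrative assumptions, not minikube's exact implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Wait up to 4 minutes for every kube-proxy pod in kube-system to report Ready.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: "k8s-app=kube-proxy"})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
				return len(pods.Items) > 0, nil
			})
		fmt.Println("ready:", err == nil)
	}
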
	I1121 14:32:27.556937  296399 addons.go:530] duration metric: took 645.10611ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:32:27.794789  296399 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-459127" context rescaled to 1 replicas
	W1121 14:32:29.295804  296399 node_ready.go:57] node "kindnet-459127" has "Ready":"False" status (will retry)
	I1121 14:32:25.828660  306176 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:32:25.829043  306176 start.go:159] libmachine.API.Create for "calico-459127" (driver="docker")
	I1121 14:32:25.829092  306176 client.go:173] LocalClient.Create starting
	I1121 14:32:25.829185  306176 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem
	I1121 14:32:25.829222  306176 main.go:143] libmachine: Decoding PEM data...
	I1121 14:32:25.829239  306176 main.go:143] libmachine: Parsing certificate...
	I1121 14:32:25.829324  306176 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem
	I1121 14:32:25.829350  306176 main.go:143] libmachine: Decoding PEM data...
	I1121 14:32:25.829364  306176 main.go:143] libmachine: Parsing certificate...
	I1121 14:32:25.829876  306176 cli_runner.go:164] Run: docker network inspect calico-459127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:32:25.853972  306176 cli_runner.go:211] docker network inspect calico-459127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:32:25.854071  306176 network_create.go:284] running [docker network inspect calico-459127] to gather additional debugging logs...
	I1121 14:32:25.854094  306176 cli_runner.go:164] Run: docker network inspect calico-459127
	W1121 14:32:25.882020  306176 cli_runner.go:211] docker network inspect calico-459127 returned with exit code 1
	I1121 14:32:25.882059  306176 network_create.go:287] error running [docker network inspect calico-459127]: docker network inspect calico-459127: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-459127 not found
	I1121 14:32:25.882092  306176 network_create.go:289] output of [docker network inspect calico-459127]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-459127 not found
	
	** /stderr **
	I1121 14:32:25.882216  306176 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:32:25.907166  306176 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-66cfc06dc768 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:44:28:22:82:94} reservation:<nil>}
	I1121 14:32:25.907971  306176 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-39921db0d513 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:e4:85:98:a5:e3} reservation:<nil>}
	I1121 14:32:25.908985  306176 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-36a8741c90a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:21:99:72:63:4a} reservation:<nil>}
	I1121 14:32:25.911137  306176 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001de39e0}
	I1121 14:32:25.911175  306176 network_create.go:124] attempt to create docker network calico-459127 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1121 14:32:25.911233  306176 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-459127 calico-459127
	I1121 14:32:25.977496  306176 network_create.go:108] docker network calico-459127 192.168.76.0/24 created
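
	The lines above skip subnets already claimed by existing bridges and then create a labeled bridge network with an explicit subnet, gateway, and MTU. A rough Go sketch of that docker CLI invocation, reusing the flags shown in the log (the helper name and error handling are illustrative, not minikube's own code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createMinikubeNetwork creates a labeled bridge network with a fixed
	// subnet, gateway, and MTU, mirroring the flags visible in the log above.
	func createMinikubeNetwork(name, subnet, gateway string, mtu int) error {
		args := []string{
			"network", "create",
			"--driver=bridge",
			"--subnet=" + subnet,
			"--gateway=" + gateway,
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=" + name,
			name,
		}
		out, err := exec.Command("docker", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker network create: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createMinikubeNetwork("calico-459127", "192.168.76.0/24", "192.168.76.1", 1500); err != nil {
			fmt.Println(err)
		}
	}
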
	I1121 14:32:25.977534  306176 kic.go:121] calculated static IP "192.168.76.2" for the "calico-459127" container
	I1121 14:32:25.977667  306176 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:32:25.999630  306176 cli_runner.go:164] Run: docker volume create calico-459127 --label name.minikube.sigs.k8s.io=calico-459127 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:32:26.020859  306176 oci.go:103] Successfully created a docker volume calico-459127
	I1121 14:32:26.020942  306176 cli_runner.go:164] Run: docker run --rm --name calico-459127-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-459127 --entrypoint /usr/bin/test -v calico-459127:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:32:26.451151  306176 oci.go:107] Successfully prepared a docker volume calico-459127
	I1121 14:32:26.451255  306176 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:32:26.451269  306176 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:32:26.451375  306176 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-459127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 14:32:31.334283  306176 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-459127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.882832271s)
	I1121 14:32:31.334325  306176 kic.go:203] duration metric: took 4.883051074s to extract preloaded images to volume ...
	W1121 14:32:31.334427  306176 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 14:32:31.334480  306176 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 14:32:31.334528  306176 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:32:31.401637  306176 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-459127 --name calico-459127 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-459127 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-459127 --network calico-459127 --ip 192.168.76.2 --volume calico-459127:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:32:31.768395  306176 cli_runner.go:164] Run: docker container inspect calico-459127 --format={{.State.Running}}
	I1121 14:32:31.790739  306176 cli_runner.go:164] Run: docker container inspect calico-459127 --format={{.State.Status}}
	I1121 14:32:31.812630  306176 cli_runner.go:164] Run: docker exec calico-459127 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:32:31.864206  306176 oci.go:144] the created container "calico-459127" has a running status.
	I1121 14:32:31.864245  306176 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa...
	I1121 14:32:31.998765  306176 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:32:32.031738  306176 cli_runner.go:164] Run: docker container inspect calico-459127 --format={{.State.Status}}
	I1121 14:32:32.059870  306176 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:32:32.059895  306176 kic_runner.go:114] Args: [docker exec --privileged calico-459127 chown docker:docker /home/docker/.ssh/authorized_keys]
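
	The SSH key step above generates an id_rsa for the node container and installs the public half as /home/docker/.ssh/authorized_keys before fixing ownership. A self-contained sketch of generating such a key pair in Go; the file paths are placeholders and this is not minikube's own key handling (which lives behind kic.go / kic_runner.go as logged above):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// generateKicKey writes an RSA private key and the matching authorized_keys
	// line, similar in spirit to the id_rsa / authorized_keys step above.
	func generateKicKey(privPath, authorizedKeysPath string) error {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		if err := os.WriteFile(privPath, privPEM, 0600); err != nil {
			return err
		}
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			return err
		}
		return os.WriteFile(authorizedKeysPath, ssh.MarshalAuthorizedKey(pub), 0644)
	}

	func main() {
		if err := generateKicKey("id_rsa", "authorized_keys"); err != nil {
			fmt.Println(err)
		}
	}
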
	I1121 14:32:32.112824  306176 cli_runner.go:164] Run: docker container inspect calico-459127 --format={{.State.Status}}
	I1121 14:32:32.145315  306176 machine.go:94] provisionDockerMachine start ...
	I1121 14:32:32.145425  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:32.175423  306176 main.go:143] libmachine: Using SSH client type: native
	I1121 14:32:32.175891  306176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I1121 14:32:32.175942  306176 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:32:32.340517  306176 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-459127
	
	I1121 14:32:32.340580  306176 ubuntu.go:182] provisioning hostname "calico-459127"
	I1121 14:32:32.340658  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:32.364531  306176 main.go:143] libmachine: Using SSH client type: native
	I1121 14:32:32.364863  306176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I1121 14:32:32.364885  306176 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-459127 && echo "calico-459127" | sudo tee /etc/hostname
	I1121 14:32:32.525073  306176 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-459127
	
	I1121 14:32:32.525164  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:32.549227  306176 main.go:143] libmachine: Using SSH client type: native
	I1121 14:32:32.549502  306176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I1121 14:32:32.549524  306176 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-459127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-459127/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-459127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:32:32.698003  306176 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:32:32.698037  306176 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11004/.minikube}
	I1121 14:32:32.698086  306176 ubuntu.go:190] setting up certificates
	I1121 14:32:32.698097  306176 provision.go:84] configureAuth start
	I1121 14:32:32.698182  306176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-459127
	I1121 14:32:32.722163  306176 provision.go:143] copyHostCerts
	I1121 14:32:32.722240  306176 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem, removing ...
	I1121 14:32:32.722254  306176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem
	I1121 14:32:32.722333  306176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/ca.pem (1078 bytes)
	I1121 14:32:32.722477  306176 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem, removing ...
	I1121 14:32:32.722490  306176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem
	I1121 14:32:32.722531  306176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/cert.pem (1123 bytes)
	I1121 14:32:32.722650  306176 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem, removing ...
	I1121 14:32:32.722662  306176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem
	I1121 14:32:32.722697  306176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11004/.minikube/key.pem (1675 bytes)
	I1121 14:32:32.722783  306176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem org=jenkins.calico-459127 san=[127.0.0.1 192.168.76.2 calico-459127 localhost minikube]
	I1121 14:32:33.334931  306176 provision.go:177] copyRemoteCerts
	I1121 14:32:33.334993  306176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:32:33.335029  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:33.352615  306176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa Username:docker}
	I1121 14:32:33.452282  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:32:33.474393  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 14:32:33.494035  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:32:33.512800  306176 provision.go:87] duration metric: took 814.686907ms to configureAuth
	I1121 14:32:33.512847  306176 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:32:33.513050  306176 config.go:182] Loaded profile config "calico-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:32:33.513063  306176 machine.go:97] duration metric: took 1.367725245s to provisionDockerMachine
	I1121 14:32:33.513070  306176 client.go:176] duration metric: took 7.683971643s to LocalClient.Create
	I1121 14:32:33.513090  306176 start.go:167] duration metric: took 7.684053907s to libmachine.API.Create "calico-459127"
	I1121 14:32:33.513103  306176 start.go:293] postStartSetup for "calico-459127" (driver="docker")
	I1121 14:32:33.513114  306176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:32:33.513178  306176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:32:33.513220  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:33.533670  306176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa Username:docker}
	I1121 14:32:33.639457  306176 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:32:33.644098  306176 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:32:33.644136  306176 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:32:33.644157  306176 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/addons for local assets ...
	I1121 14:32:33.644209  306176 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11004/.minikube/files for local assets ...
	I1121 14:32:33.644312  306176 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem -> 145232.pem in /etc/ssl/certs
	I1121 14:32:33.644438  306176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:32:33.653265  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:32:33.678710  306176 start.go:296] duration metric: took 165.592945ms for postStartSetup
	I1121 14:32:33.679032  306176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-459127
	I1121 14:32:33.698668  306176 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/config.json ...
	I1121 14:32:33.699049  306176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:32:33.699099  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:33.719149  306176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa Username:docker}
	I1121 14:32:33.814333  306176 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:32:33.819681  306176 start.go:128] duration metric: took 7.997311178s to createHost
	I1121 14:32:33.819712  306176 start.go:83] releasing machines lock for "calico-459127", held for 7.99748051s
	I1121 14:32:33.819788  306176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-459127
	I1121 14:32:33.839243  306176 ssh_runner.go:195] Run: cat /version.json
	I1121 14:32:33.839296  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:33.839308  306176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:32:33.839381  306176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-459127
	I1121 14:32:33.858978  306176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa Username:docker}
	I1121 14:32:33.860292  306176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/calico-459127/id_rsa Username:docker}
	I1121 14:32:33.953331  306176 ssh_runner.go:195] Run: systemctl --version
	I1121 14:32:34.012582  306176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:32:34.017743  306176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:32:34.017798  306176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:32:34.045369  306176 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:32:34.045398  306176 start.go:496] detecting cgroup driver to use...
	I1121 14:32:34.045435  306176 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:32:34.045637  306176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:32:34.060777  306176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:32:34.074963  306176 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:32:34.075024  306176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:32:34.092095  306176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:32:34.110941  306176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:32:34.192979  306176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:32:34.299046  306176 docker.go:234] disabling docker service ...
	I1121 14:32:34.299105  306176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:32:34.319462  306176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:32:34.332669  306176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:32:34.417677  306176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:32:34.507275  306176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:32:34.521363  306176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:32:34.537081  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:32:34.548441  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:32:34.559336  306176 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1121 14:32:34.559400  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1121 14:32:34.569217  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:32:34.579343  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:32:34.589163  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:32:34.599142  306176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:32:34.608443  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:32:34.617869  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:32:34.627889  306176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:32:34.637745  306176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:32:34.646093  306176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:32:34.654819  306176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:32:34.740746  306176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:32:34.837769  306176 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:32:34.837826  306176 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:32:34.842128  306176 start.go:564] Will wait 60s for crictl version
	I1121 14:32:34.842177  306176 ssh_runner.go:195] Run: which crictl
	I1121 14:32:34.845822  306176 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:32:34.872806  306176 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:32:34.872870  306176 ssh_runner.go:195] Run: containerd --version
	I1121 14:32:34.896107  306176 ssh_runner.go:195] Run: containerd --version
	I1121 14:32:34.920572  306176 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
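
	After rewriting /etc/containerd/config.toml and restarting the service, the log notes a 60s wait for /run/containerd/containerd.sock before probing crictl. A small Go sketch of that kind of socket wait; the poll interval and error message are illustrative assumptions:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for a unix socket path to appear, mirroring the
	// "Will wait 60s for socket path /run/containerd/containerd.sock" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
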
	W1121 14:32:31.315006  296399 node_ready.go:57] node "kindnet-459127" has "Ready":"False" status (will retry)
	W1121 14:32:33.795994  296399 node_ready.go:57] node "kindnet-459127" has "Ready":"False" status (will retry)
	I1121 14:32:34.921960  306176 cli_runner.go:164] Run: docker network inspect calico-459127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:32:34.940365  306176 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 14:32:34.945035  306176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
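
	The one-liner above is an idempotent /etc/hosts update: any existing host.minikube.internal line is filtered out before the gateway mapping is re-appended. A minimal Go equivalent of the same pattern (minikube itself shells out exactly as shown above; this sketch is only illustrative):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any line that maps the given hostname and appends a
	// fresh "ip<TAB>host" mapping, mirroring the grep -v / echo pipeline above.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
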
	I1121 14:32:34.956162  306176 kubeadm.go:884] updating cluster {Name:calico-459127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-459127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:32:34.956374  306176 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:32:34.956492  306176 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:32:34.984067  306176 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:32:34.984091  306176 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:32:34.984150  306176 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:32:35.011607  306176 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:32:35.011632  306176 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:32:35.011640  306176 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1121 14:32:35.011722  306176 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-459127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-459127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1121 14:32:35.011783  306176 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:32:35.039216  306176 cni.go:84] Creating CNI manager for "calico"
	I1121 14:32:35.039243  306176 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:32:35.039265  306176 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-459127 NodeName:calico-459127 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:32:35.039421  306176 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-459127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:32:35.039479  306176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:32:35.048351  306176 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:32:35.048417  306176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:32:35.057236  306176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1121 14:32:35.071942  306176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:32:35.088624  306176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
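
	The kubeadm.yaml staged above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks such a stream and reports each document's apiVersion and kind, assuming gopkg.in/yaml.v3; this is illustrative and not part of minikube:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			err := dec.Decode(&doc)
			if errors.Is(err, io.EOF) {
				break
			}
			if err != nil {
				panic(err)
			}
			// Each document carries apiVersion and kind, e.g. kubeadm.k8s.io/v1beta4 ClusterConfiguration.
			fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
		}
	}
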
	I1121 14:32:35.102148  306176 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:32:35.106208  306176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:32:35.116713  306176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:32:35.200255  306176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:32:35.224100  306176 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127 for IP: 192.168.76.2
	I1121 14:32:35.224125  306176 certs.go:195] generating shared ca certs ...
	I1121 14:32:35.224146  306176 certs.go:227] acquiring lock for ca certs: {Name:mk4ac68319839cd6684afc66121341297238277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.224303  306176 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key
	I1121 14:32:35.224362  306176 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key
	I1121 14:32:35.224376  306176 certs.go:257] generating profile certs ...
	I1121 14:32:35.224427  306176 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/client.key
	I1121 14:32:35.224440  306176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/client.crt with IP's: []
	I1121 14:32:35.568044  306176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/client.crt ...
	I1121 14:32:35.568073  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/client.crt: {Name:mk450d484243924f14a08823e015fad0352b4312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.568260  306176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/client.key ...
	I1121 14:32:35.568272  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/client.key: {Name:mk060861d7049505ad8aec0ac68a7c5386c7739f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.568350  306176 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.key.d2e425c6
	I1121 14:32:35.568365  306176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.crt.d2e425c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1121 14:32:35.695298  306176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.crt.d2e425c6 ...
	I1121 14:32:35.695327  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.crt.d2e425c6: {Name:mke66ab9acdc743d0c116c2b3d4cb6372025668b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.695497  306176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.key.d2e425c6 ...
	I1121 14:32:35.695510  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.key.d2e425c6: {Name:mk091aa1ce01c8419773da7bc1cd95a9840ca4c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.695612  306176 certs.go:382] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.crt.d2e425c6 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.crt
	I1121 14:32:35.695713  306176 certs.go:386] copying /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.key.d2e425c6 -> /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.key
	I1121 14:32:35.695798  306176 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.key
	I1121 14:32:35.695813  306176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.crt with IP's: []
	I1121 14:32:35.914666  306176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.crt ...
	I1121 14:32:35.914692  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.crt: {Name:mk1ece06d080c593025e2487478e531af9572914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.914878  306176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.key ...
	I1121 14:32:35.914901  306176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.key: {Name:mk6dc3ecb5ea0fcd797e8da375dce43678c5a603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:32:35.915115  306176 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem (1338 bytes)
	W1121 14:32:35.915163  306176 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523_empty.pem, impossibly tiny 0 bytes
	I1121 14:32:35.915178  306176 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:32:35.915209  306176 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:32:35.915267  306176 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:32:35.915310  306176 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/certs/key.pem (1675 bytes)
	I1121 14:32:35.915368  306176 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem (1708 bytes)
	I1121 14:32:35.915994  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:32:35.935759  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:32:35.956630  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:32:35.975642  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:32:35.996827  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 14:32:36.018106  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:32:36.037653  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:32:36.056435  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/calico-459127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:32:36.074680  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:32:36.097219  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/certs/14523.pem --> /usr/share/ca-certificates/14523.pem (1338 bytes)
	I1121 14:32:36.115873  306176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/ssl/certs/145232.pem --> /usr/share/ca-certificates/145232.pem (1708 bytes)
	I1121 14:32:36.134651  306176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:32:36.148459  306176 ssh_runner.go:195] Run: openssl version
	I1121 14:32:36.154894  306176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14523.pem && ln -fs /usr/share/ca-certificates/14523.pem /etc/ssl/certs/14523.pem"
	I1121 14:32:36.165274  306176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14523.pem
	I1121 14:32:36.169384  306176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14523.pem
	I1121 14:32:36.169440  306176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14523.pem
	I1121 14:32:36.206052  306176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14523.pem /etc/ssl/certs/51391683.0"
	I1121 14:32:36.215812  306176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145232.pem && ln -fs /usr/share/ca-certificates/145232.pem /etc/ssl/certs/145232.pem"
	I1121 14:32:36.226243  306176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145232.pem
	I1121 14:32:36.230447  306176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145232.pem
	I1121 14:32:36.230534  306176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145232.pem
	I1121 14:32:36.268383  306176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145232.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:32:36.278059  306176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:32:36.287506  306176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:32:36.291729  306176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:32:36.291794  306176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:32:36.329898  306176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
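
	Each CA above is installed by computing its OpenSSL subject hash (openssl x509 -hash -noout -in ...) and symlinking the PEM as <hash>.0 under /etc/ssl/certs, which is how OpenSSL's lookup-by-hash finds trusted certificates. A compact Go sketch of that step; the certificate path in main is just an example:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCertByHash runs `openssl x509 -hash -noout -in cert` and symlinks the
	// cert as <hash>.0 in /etc/ssl/certs, mirroring the ln -fs commands above.
	func linkCertByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
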
	I1121 14:32:36.339754  306176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:32:36.343759  306176 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:32:36.343816  306176 kubeadm.go:401] StartCluster: {Name:calico-459127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-459127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:32:36.343873  306176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:32:36.343932  306176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:32:36.373595  306176 cri.go:89] found id: ""
	I1121 14:32:36.373667  306176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:32:36.382781  306176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:32:36.391811  306176 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:32:36.391866  306176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:32:36.401122  306176 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:32:36.401181  306176 kubeadm.go:158] found existing configuration files:
	
	I1121 14:32:36.401255  306176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:32:36.410119  306176 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:32:36.410182  306176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:32:36.418399  306176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:32:36.426323  306176 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:32:36.426373  306176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:32:36.434102  306176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:32:36.442326  306176 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:32:36.442395  306176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:32:36.450273  306176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:32:36.459365  306176 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:32:36.459473  306176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:32:36.468397  306176 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:32:36.509590  306176 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:32:36.509666  306176 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:32:36.532047  306176 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:32:36.532134  306176 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:32:36.532196  306176 kubeadm.go:319] OS: Linux
	I1121 14:32:36.532271  306176 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:32:36.532354  306176 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:32:36.532423  306176 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:32:36.532512  306176 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:32:36.532609  306176 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:32:36.532688  306176 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:32:36.532757  306176 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:32:36.532832  306176 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:32:36.597067  306176 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:32:36.597182  306176 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:32:36.597351  306176 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:32:36.603106  306176 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	d591e340d2f0e       56cc512116c8f       9 seconds ago       Running             busybox                   0                   d10dadceab076       busybox                                      default
	9d688faa4a188       52546a367cc9e       15 seconds ago      Running             coredns                   0                   5c225eaea852f       coredns-66bc5c9577-r95cs                     kube-system
	b268167d64766       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   aea03caae917a       storage-provisioner                          kube-system
	57528421409cb       409467f978b4a       27 seconds ago      Running             kindnet-cni               0                   e5e5a727a1d24       kindnet-2dvsb                                kube-system
	cc3fa030dc8be       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   1955f297de7b9       kube-proxy-klwwh                             kube-system
	c28cd1c81ac68       c3994bc696102       39 seconds ago      Running             kube-apiserver            0                   0d62e1f70e929       kube-apiserver-embed-certs-013140            kube-system
	54e28dda6c675       7dd6aaa1717ab       39 seconds ago      Running             kube-scheduler            0                   40473058033b7       kube-scheduler-embed-certs-013140            kube-system
	39548d39886f2       5f1f5298c888d       39 seconds ago      Running             etcd                      0                   e1d73120044bd       etcd-embed-certs-013140                      kube-system
	9a9eb51d990bc       c80c8dbafe7dd       39 seconds ago      Running             kube-controller-manager   0                   1c4e9d3d48ab8       kube-controller-manager-embed-certs-013140   kube-system
	
	
	==> containerd <==
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.451399101Z" level=info msg="Container 9d688faa4a188abd2938dd2bcf1a9be1db6e99aa2c5a1fb8e1e458294a0cf60c: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.455635025Z" level=info msg="CreateContainer within sandbox \"aea03caae917afbb82795884d9216af32e8ad7de44695d9a4d107f60a478850b\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"b268167d647665e45fecf0cb0cf73ef2acbadd9ac67be3f37d3acd56aa119f63\""
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.456275761Z" level=info msg="StartContainer for \"b268167d647665e45fecf0cb0cf73ef2acbadd9ac67be3f37d3acd56aa119f63\""
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.457145999Z" level=info msg="connecting to shim b268167d647665e45fecf0cb0cf73ef2acbadd9ac67be3f37d3acd56aa119f63" address="unix:///run/containerd/s/d372fb2bd22e97deb83f175bef45597e72b15123e7ac7e32e450b069e72f695d" protocol=ttrpc version=3
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.461046130Z" level=info msg="CreateContainer within sandbox \"5c225eaea852fa20c561288b44ce385c61d6ebf4a727c575091efca1a9519abb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d688faa4a188abd2938dd2bcf1a9be1db6e99aa2c5a1fb8e1e458294a0cf60c\""
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.461755938Z" level=info msg="StartContainer for \"9d688faa4a188abd2938dd2bcf1a9be1db6e99aa2c5a1fb8e1e458294a0cf60c\""
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.462825411Z" level=info msg="connecting to shim 9d688faa4a188abd2938dd2bcf1a9be1db6e99aa2c5a1fb8e1e458294a0cf60c" address="unix:///run/containerd/s/69dfb2196f79cd589449f2d5acdec8f7e0ed51201aebbfe6c8e86b40fa6ef1a0" protocol=ttrpc version=3
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.518215943Z" level=info msg="StartContainer for \"b268167d647665e45fecf0cb0cf73ef2acbadd9ac67be3f37d3acd56aa119f63\" returns successfully"
	Nov 21 14:32:24 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:24.525036985Z" level=info msg="StartContainer for \"9d688faa4a188abd2938dd2bcf1a9be1db6e99aa2c5a1fb8e1e458294a0cf60c\" returns successfully"
	Nov 21 14:32:27 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:27.678623846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:9ddebfb3-80d6-4623-aa37-0e3ce0fef04f,Namespace:default,Attempt:0,}"
	Nov 21 14:32:27 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:27.732313516Z" level=info msg="connecting to shim d10dadceab076784fbbf1d28eebe46e3b6ea7c6c5838d5380ccb30b746fa4e23" address="unix:///run/containerd/s/246f7f99563f34899197939e2c7996653dd5fc957b9f545ff16ca4b9a5c44f3f" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:32:27 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:27.817652828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:9ddebfb3-80d6-4623-aa37-0e3ce0fef04f,Namespace:default,Attempt:0,} returns sandbox id \"d10dadceab076784fbbf1d28eebe46e3b6ea7c6c5838d5380ccb30b746fa4e23\""
	Nov 21 14:32:27 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:27.820444074Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:32:30 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:30.948955599Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.078920473Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396641"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.086700493Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.099055018Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.099761890Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 3.279261805s"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.099806084Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.310194796Z" level=info msg="CreateContainer within sandbox \"d10dadceab076784fbbf1d28eebe46e3b6ea7c6c5838d5380ccb30b746fa4e23\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.324627782Z" level=info msg="Container d591e340d2f0eb49252281a085d5317c66c55c5ee7c56b7622881e226068be96: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.332198529Z" level=info msg="CreateContainer within sandbox \"d10dadceab076784fbbf1d28eebe46e3b6ea7c6c5838d5380ccb30b746fa4e23\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"d591e340d2f0eb49252281a085d5317c66c55c5ee7c56b7622881e226068be96\""
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.333018874Z" level=info msg="StartContainer for \"d591e340d2f0eb49252281a085d5317c66c55c5ee7c56b7622881e226068be96\""
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.333996419Z" level=info msg="connecting to shim d591e340d2f0eb49252281a085d5317c66c55c5ee7c56b7622881e226068be96" address="unix:///run/containerd/s/246f7f99563f34899197939e2c7996653dd5fc957b9f545ff16ca4b9a5c44f3f" protocol=ttrpc version=3
	Nov 21 14:32:31 embed-certs-013140 containerd[680]: time="2025-11-21T14:32:31.398322649Z" level=info msg="StartContainer for \"d591e340d2f0eb49252281a085d5317c66c55c5ee7c56b7622881e226068be96\" returns successfully"
	
	
	==> coredns [9d688faa4a188abd2938dd2bcf1a9be1db6e99aa2c5a1fb8e1e458294a0cf60c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38664 - 27823 "HINFO IN 1053236482022564747.6316176946796434392. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023106102s
	
	
	==> describe nodes <==
	Name:               embed-certs-013140
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-013140
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=embed-certs-013140
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_32_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:32:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-013140
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:32:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:32:38 +0000   Fri, 21 Nov 2025 14:32:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:32:38 +0000   Fri, 21 Nov 2025 14:32:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:32:38 +0000   Fri, 21 Nov 2025 14:32:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:32:38 +0000   Fri, 21 Nov 2025 14:32:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-013140
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                3e3eb59d-aa90-4836-9a30-3112c0cfe78d
	  Boot ID:                    f900700b-0668-4d24-87ff-85e15fbda365
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-r95cs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-embed-certs-013140                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-2dvsb                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-013140             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-embed-certs-013140    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-klwwh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-013140             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  33s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node embed-certs-013140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node embed-certs-013140 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node embed-certs-013140 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node embed-certs-013140 event: Registered Node embed-certs-013140 in Controller
	  Normal  NodeReady                17s   kubelet          Node embed-certs-013140 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001887] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086016] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.440508] i8042: Warning: Keylock active
	[  +0.011202] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.526419] block sda: the capability attribute has been deprecated.
	[  +0.095215] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027093] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.485024] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [39548d39886f2abc13857ea6d7e4107c5a04f203dfd462aaa6a28aaeafe921d8] <==
	{"level":"warn","ts":"2025-11-21T14:32:03.531488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.545195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.556379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.570037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.583259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.597073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.612311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.620636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.629035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.640249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.652416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.662706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.674097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.682637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.693485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.700167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.716996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.734380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.744205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.766928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.774944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.784893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:32:03.879993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:32:30.696496Z","caller":"traceutil/trace.go:172","msg":"trace[296025383] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"112.432783ms","start":"2025-11-21T14:32:30.584035Z","end":"2025-11-21T14:32:30.696468Z","steps":["trace[296025383] 'process raft request'  (duration: 112.293378ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:32:31.309673Z","caller":"traceutil/trace.go:172","msg":"trace[2002741528] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"206.67551ms","start":"2025-11-21T14:32:31.102977Z","end":"2025-11-21T14:32:31.309653Z","steps":["trace[2002741528] 'process raft request'  (duration: 206.49746ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:32:40 up  1:15,  0 user,  load average: 7.06, 4.45, 2.59
	Linux embed-certs-013140 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [57528421409cbadeb4ad18e0303c003d7c895e53c564ffdfd2782a8ab1d94fcb] <==
	I1121 14:32:13.724409       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:32:13.724768       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:32:13.724913       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:32:13.724932       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:32:13.724960       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:32:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:32:13.929945       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:32:13.930001       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:32:13.930014       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:32:13.930424       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:32:14.330329       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:32:14.330373       1 metrics.go:72] Registering metrics
	I1121 14:32:14.330439       1 controller.go:711] "Syncing nftables rules"
	I1121 14:32:23.935644       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:32:23.935726       1 main.go:301] handling current node
	I1121 14:32:33.930162       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:32:33.930231       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c28cd1c81ac68983e505ac429c0cca2766edfad182ab4d03de412efd4de8c0dc] <==
	I1121 14:32:04.712101       1 policy_source.go:240] refreshing policies
	I1121 14:32:04.754340       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:32:04.766374       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:32:04.766435       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:32:04.783248       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:32:04.795048       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:32:04.893274       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:32:05.556567       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:32:05.562137       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:32:05.562163       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:32:06.353138       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:32:06.402664       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:32:06.461828       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:32:06.469117       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:32:06.470535       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:32:06.475423       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:32:06.592087       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:32:07.542169       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:32:07.556112       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:32:07.566259       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:32:12.294249       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:32:12.493882       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:32:12.596820       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:32:12.602122       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1121 14:32:37.477688       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:57386: use of closed network connection
	
	
	==> kube-controller-manager [9a9eb51d990bc2e4a764df8db8231e9787d888f74bc19b7b106cfd760e0c6af8] <==
	I1121 14:32:11.589797       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:32:11.589812       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:32:11.589855       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:32:11.589984       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:32:11.590039       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:32:11.590061       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:32:11.590073       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:32:11.590325       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:32:11.590672       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 14:32:11.590702       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:32:11.590704       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:32:11.591376       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:32:11.591479       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:32:11.592710       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:32:11.592730       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 14:32:11.592776       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:32:11.594922       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:32:11.594949       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:32:11.595001       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:32:11.600213       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 14:32:11.600327       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 14:32:11.600462       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-013140"
	I1121 14:32:11.600522       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 14:32:11.619243       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:32:26.603399       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cc3fa030dc8becef2dcdc972ffd0ba9cb33d830c81de3f653c5f4ebd31c86d22] <==
	I1121 14:32:13.141810       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:32:13.211553       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:32:13.313201       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:32:13.313249       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:32:13.313372       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:32:13.341921       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:32:13.341989       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:32:13.347861       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:32:13.348290       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:32:13.348330       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:32:13.352176       1 config.go:200] "Starting service config controller"
	I1121 14:32:13.352253       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:32:13.352322       1 config.go:309] "Starting node config controller"
	I1121 14:32:13.352404       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:32:13.352406       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:32:13.352411       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:32:13.352417       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:32:13.352427       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:32:13.352433       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:32:13.452526       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:32:13.452578       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:32:13.452607       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [54e28dda6c6755422e5eedf01330c92fd943dbaf8692fc68be473166adf0d43c] <==
	E1121 14:32:04.681593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:32:04.685894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:32:04.686015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:32:04.686080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:32:04.686139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:32:04.687128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:32:04.688318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:32:04.690866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:32:05.492583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:32:05.492961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:32:05.621886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:32:05.642133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:32:05.677709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:32:05.713028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:32:05.748368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:32:05.761303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:32:05.774744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:32:05.792197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:32:05.794825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:32:05.833145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:32:05.911750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:32:05.973812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:32:06.089054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:32:06.227361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1121 14:32:09.051730       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:32:08 embed-certs-013140 kubelet[1495]: I1121 14:32:08.539052    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-013140" podStartSLOduration=1.539029025 podStartE2EDuration="1.539029025s" podCreationTimestamp="2025-11-21 14:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:08.53414193 +0000 UTC m=+1.187038139" watchObservedRunningTime="2025-11-21 14:32:08.539029025 +0000 UTC m=+1.191925229"
	Nov 21 14:32:08 embed-certs-013140 kubelet[1495]: I1121 14:32:08.573151    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-013140" podStartSLOduration=1.573123026 podStartE2EDuration="1.573123026s" podCreationTimestamp="2025-11-21 14:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:08.571715125 +0000 UTC m=+1.224611332" watchObservedRunningTime="2025-11-21 14:32:08.573123026 +0000 UTC m=+1.226019230"
	Nov 21 14:32:08 embed-certs-013140 kubelet[1495]: I1121 14:32:08.573376    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-013140" podStartSLOduration=1.573367548 podStartE2EDuration="1.573367548s" podCreationTimestamp="2025-11-21 14:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:08.554406099 +0000 UTC m=+1.207302309" watchObservedRunningTime="2025-11-21 14:32:08.573367548 +0000 UTC m=+1.226263759"
	Nov 21 14:32:11 embed-certs-013140 kubelet[1495]: I1121 14:32:11.634929    1495 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:32:11 embed-certs-013140 kubelet[1495]: I1121 14:32:11.635618    1495 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:32:11 embed-certs-013140 kubelet[1495]: I1121 14:32:11.704389    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-013140" podStartSLOduration=4.70437242 podStartE2EDuration="4.70437242s" podCreationTimestamp="2025-11-21 14:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:08.586823066 +0000 UTC m=+1.239719292" watchObservedRunningTime="2025-11-21 14:32:11.70437242 +0000 UTC m=+4.357268629"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574316    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5a583a7c-33a2-41cf-a1f9-cf86db9bd461-kube-proxy\") pod \"kube-proxy-klwwh\" (UID: \"5a583a7c-33a2-41cf-a1f9-cf86db9bd461\") " pod="kube-system/kube-proxy-klwwh"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574363    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3a733ace-4ace-47c9-b6b9-8e5f65933c49-cni-cfg\") pod \"kindnet-2dvsb\" (UID: \"3a733ace-4ace-47c9-b6b9-8e5f65933c49\") " pod="kube-system/kindnet-2dvsb"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574380    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a583a7c-33a2-41cf-a1f9-cf86db9bd461-xtables-lock\") pod \"kube-proxy-klwwh\" (UID: \"5a583a7c-33a2-41cf-a1f9-cf86db9bd461\") " pod="kube-system/kube-proxy-klwwh"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574395    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6kj\" (UniqueName: \"kubernetes.io/projected/5a583a7c-33a2-41cf-a1f9-cf86db9bd461-kube-api-access-7c6kj\") pod \"kube-proxy-klwwh\" (UID: \"5a583a7c-33a2-41cf-a1f9-cf86db9bd461\") " pod="kube-system/kube-proxy-klwwh"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574484    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a733ace-4ace-47c9-b6b9-8e5f65933c49-lib-modules\") pod \"kindnet-2dvsb\" (UID: \"3a733ace-4ace-47c9-b6b9-8e5f65933c49\") " pod="kube-system/kindnet-2dvsb"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574553    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k75hv\" (UniqueName: \"kubernetes.io/projected/3a733ace-4ace-47c9-b6b9-8e5f65933c49-kube-api-access-k75hv\") pod \"kindnet-2dvsb\" (UID: \"3a733ace-4ace-47c9-b6b9-8e5f65933c49\") " pod="kube-system/kindnet-2dvsb"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574593    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a583a7c-33a2-41cf-a1f9-cf86db9bd461-lib-modules\") pod \"kube-proxy-klwwh\" (UID: \"5a583a7c-33a2-41cf-a1f9-cf86db9bd461\") " pod="kube-system/kube-proxy-klwwh"
	Nov 21 14:32:12 embed-certs-013140 kubelet[1495]: I1121 14:32:12.574615    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a733ace-4ace-47c9-b6b9-8e5f65933c49-xtables-lock\") pod \"kindnet-2dvsb\" (UID: \"3a733ace-4ace-47c9-b6b9-8e5f65933c49\") " pod="kube-system/kindnet-2dvsb"
	Nov 21 14:32:13 embed-certs-013140 kubelet[1495]: I1121 14:32:13.511376    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-klwwh" podStartSLOduration=1.511351405 podStartE2EDuration="1.511351405s" podCreationTimestamp="2025-11-21 14:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:13.511215762 +0000 UTC m=+6.164111971" watchObservedRunningTime="2025-11-21 14:32:13.511351405 +0000 UTC m=+6.164247618"
	Nov 21 14:32:13 embed-certs-013140 kubelet[1495]: I1121 14:32:13.548498    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2dvsb" podStartSLOduration=1.548470568 podStartE2EDuration="1.548470568s" podCreationTimestamp="2025-11-21 14:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:13.526393617 +0000 UTC m=+6.179289809" watchObservedRunningTime="2025-11-21 14:32:13.548470568 +0000 UTC m=+6.201366776"
	Nov 21 14:32:23 embed-certs-013140 kubelet[1495]: I1121 14:32:23.974827    1495 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:32:24 embed-certs-013140 kubelet[1495]: I1121 14:32:24.051600    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcf2q\" (UniqueName: \"kubernetes.io/projected/9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec-kube-api-access-kcf2q\") pod \"storage-provisioner\" (UID: \"9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec\") " pod="kube-system/storage-provisioner"
	Nov 21 14:32:24 embed-certs-013140 kubelet[1495]: I1121 14:32:24.051662    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vcm\" (UniqueName: \"kubernetes.io/projected/f98cd5f5-83b2-4a40-b75d-868145de6f36-kube-api-access-b7vcm\") pod \"coredns-66bc5c9577-r95cs\" (UID: \"f98cd5f5-83b2-4a40-b75d-868145de6f36\") " pod="kube-system/coredns-66bc5c9577-r95cs"
	Nov 21 14:32:24 embed-certs-013140 kubelet[1495]: I1121 14:32:24.051680    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec-tmp\") pod \"storage-provisioner\" (UID: \"9db1ef7d-dbf5-4749-b1b5-f6784f22c0ec\") " pod="kube-system/storage-provisioner"
	Nov 21 14:32:24 embed-certs-013140 kubelet[1495]: I1121 14:32:24.051699    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f98cd5f5-83b2-4a40-b75d-868145de6f36-config-volume\") pod \"coredns-66bc5c9577-r95cs\" (UID: \"f98cd5f5-83b2-4a40-b75d-868145de6f36\") " pod="kube-system/coredns-66bc5c9577-r95cs"
	Nov 21 14:32:24 embed-certs-013140 kubelet[1495]: I1121 14:32:24.543186    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r95cs" podStartSLOduration=12.543164953 podStartE2EDuration="12.543164953s" podCreationTimestamp="2025-11-21 14:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:24.542742189 +0000 UTC m=+17.195638398" watchObservedRunningTime="2025-11-21 14:32:24.543164953 +0000 UTC m=+17.196061162"
	Nov 21 14:32:24 embed-certs-013140 kubelet[1495]: I1121 14:32:24.557880    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.55785693 podStartE2EDuration="12.55785693s" podCreationTimestamp="2025-11-21 14:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:32:24.557841468 +0000 UTC m=+17.210737677" watchObservedRunningTime="2025-11-21 14:32:24.55785693 +0000 UTC m=+17.210753137"
	Nov 21 14:32:27 embed-certs-013140 kubelet[1495]: I1121 14:32:27.475809    1495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s4x8\" (UniqueName: \"kubernetes.io/projected/9ddebfb3-80d6-4623-aa37-0e3ce0fef04f-kube-api-access-4s4x8\") pod \"busybox\" (UID: \"9ddebfb3-80d6-4623-aa37-0e3ce0fef04f\") " pod="default/busybox"
	Nov 21 14:32:31 embed-certs-013140 kubelet[1495]: I1121 14:32:31.566619    1495 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.2856899849999999 podStartE2EDuration="4.566594953s" podCreationTimestamp="2025-11-21 14:32:27 +0000 UTC" firstStartedPulling="2025-11-21 14:32:27.819866709 +0000 UTC m=+20.472762975" lastFinishedPulling="2025-11-21 14:32:31.10077175 +0000 UTC m=+23.753667943" observedRunningTime="2025-11-21 14:32:31.566109697 +0000 UTC m=+24.219005905" watchObservedRunningTime="2025-11-21 14:32:31.566594953 +0000 UTC m=+24.219491161"
	
	
	==> storage-provisioner [b268167d647665e45fecf0cb0cf73ef2acbadd9ac67be3f37d3acd56aa119f63] <==
	I1121 14:32:24.537721       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:32:24.542268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:24.549712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:32:24.550229       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:32:24.550603       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-013140_b205943d-9003-4c7a-8a73-53a83151b14f!
	I1121 14:32:24.551701       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ae412e9-189e-4d61-b533-a5ccb87a6e9d", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-013140_b205943d-9003-4c7a-8a73-53a83151b14f became leader
	W1121 14:32:24.553713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:24.558713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:32:24.651314       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-013140_b205943d-9003-4c7a-8a73-53a83151b14f!
	W1121 14:32:26.562195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:26.569153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:28.572609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:28.577680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:30.581421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:30.697747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:32.702001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:32.707365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:34.710627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:34.716319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:36.720402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:36.725468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:38.729827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:38.735318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:40.739575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:32:40.745299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-013140 -n embed-certs-013140
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-013140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (14.30s)
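
To poke at the same state by hand after a DeployApp failure, the two post-mortem helper commands above can be re-run directly. A minimal sketch, assuming the embed-certs-013140 profile has not been deleted and its kubectl context is still present:

	# API server status for the profile under test
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-013140 -n embed-certs-013140
	# names of any pods not in the Running phase, across all namespaces
	kubectl --context embed-certs-013140 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'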


Test pass (303/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 17.29
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 11.28
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.24
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.42
21 TestBinaryMirror 0.85
22 TestOffline 54.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 129.34
29 TestAddons/serial/Volcano 38.38
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.48
35 TestAddons/parallel/Registry 15.53
36 TestAddons/parallel/RegistryCreds 0.73
37 TestAddons/parallel/Ingress 20.3
38 TestAddons/parallel/InspektorGadget 10.66
39 TestAddons/parallel/MetricsServer 5.63
41 TestAddons/parallel/CSI 46.88
42 TestAddons/parallel/Headlamp 21.41
43 TestAddons/parallel/CloudSpanner 5.54
44 TestAddons/parallel/LocalPath 54.76
45 TestAddons/parallel/NvidiaDevicePlugin 6.52
46 TestAddons/parallel/Yakd 10.7
47 TestAddons/parallel/AmdGpuDevicePlugin 5.53
48 TestAddons/StoppedEnableDisable 12.33
49 TestCertOptions 26.57
50 TestCertExpiration 226.01
52 TestForceSystemdFlag 28.04
53 TestForceSystemdEnv 37.66
54 TestDockerEnvContainerd 37.33
58 TestErrorSpam/setup 21.24
59 TestErrorSpam/start 0.69
60 TestErrorSpam/status 0.98
61 TestErrorSpam/pause 1.48
62 TestErrorSpam/unpause 1.56
63 TestErrorSpam/stop 1.51
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.37
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.47
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.11
75 TestFunctional/serial/CacheCmd/cache/add_local 1.95
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 38.55
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.28
86 TestFunctional/serial/LogsFileCmd 1.31
87 TestFunctional/serial/InvalidService 4.15
89 TestFunctional/parallel/ConfigCmd 0.5
90 TestFunctional/parallel/DashboardCmd 9.84
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.06
97 TestFunctional/parallel/ServiceCmdConnect 10.51
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 32.33
101 TestFunctional/parallel/SSHCmd 0.69
102 TestFunctional/parallel/CpCmd 1.86
103 TestFunctional/parallel/MySQL 28.76
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.85
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
113 TestFunctional/parallel/License 0.39
114 TestFunctional/parallel/ServiceCmd/DeployApp 8.21
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.22
120 TestFunctional/parallel/ServiceCmd/List 0.52
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
123 TestFunctional/parallel/ServiceCmd/Format 0.41
124 TestFunctional/parallel/ServiceCmd/URL 0.41
125 TestFunctional/parallel/Version/short 0.07
126 TestFunctional/parallel/Version/components 0.53
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
131 TestFunctional/parallel/ImageCommands/ImageBuild 4.97
132 TestFunctional/parallel/ImageCommands/Setup 1.93
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.24
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
144 TestFunctional/parallel/ProfileCmd/profile_list 0.51
145 TestFunctional/parallel/MountCmd/any-port 8.17
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.15
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.42
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
153 TestFunctional/parallel/MountCmd/specific-port 1.86
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.89
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 114.74
163 TestMultiControlPlane/serial/DeployApp 5.51
164 TestMultiControlPlane/serial/PingHostFromPods 1.19
165 TestMultiControlPlane/serial/AddWorkerNode 24.24
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
168 TestMultiControlPlane/serial/CopyFile 17.44
169 TestMultiControlPlane/serial/StopSecondaryNode 12.75
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.39
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 96.17
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.51
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
176 TestMultiControlPlane/serial/StopCluster 36.21
177 TestMultiControlPlane/serial/RestartCluster 55.22
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
179 TestMultiControlPlane/serial/AddSecondaryNode 75.95
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
185 TestJSONOutput/start/Command 38.83
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.76
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.61
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.86
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 33.79
211 TestKicCustomNetwork/use_default_bridge_network 23.43
212 TestKicExistingNetwork 26.79
213 TestKicCustomSubnet 24.11
214 TestKicStaticIP 27.61
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 49.6
219 TestMountStart/serial/StartWithMountFirst 4.66
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 5.24
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.69
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.27
226 TestMountStart/serial/RestartStopped 7.6
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 62.7
231 TestMultiNode/serial/DeployApp2Nodes 5.19
232 TestMultiNode/serial/PingHostFrom2Pods 0.82
233 TestMultiNode/serial/AddNode 23.97
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.68
236 TestMultiNode/serial/CopyFile 9.99
237 TestMultiNode/serial/StopNode 2.3
238 TestMultiNode/serial/StartAfterStop 7.02
239 TestMultiNode/serial/RestartKeepsNodes 78.77
240 TestMultiNode/serial/DeleteNode 5.28
241 TestMultiNode/serial/StopMultiNode 24.13
242 TestMultiNode/serial/RestartMultiNode 44.81
243 TestMultiNode/serial/ValidateNameConflict 27.45
248 TestPreload 114.07
250 TestScheduledStopUnix 97.87
253 TestInsufficientStorage 9.96
254 TestRunningBinaryUpgrade 47.3
256 TestKubernetesUpgrade 328.22
257 TestMissingContainerUpgrade 136.71
259 TestPause/serial/Start 51.48
260 TestPause/serial/SecondStartNoReconfiguration 6.17
261 TestPause/serial/Pause 1.12
262 TestPause/serial/VerifyStatus 0.33
263 TestPause/serial/Unpause 0.91
264 TestPause/serial/PauseAgain 1.13
265 TestPause/serial/DeletePaused 3.04
266 TestPause/serial/VerifyDeletedResources 0.62
267 TestStoppedBinaryUpgrade/Setup 2.6
268 TestStoppedBinaryUpgrade/Upgrade 91.94
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.19
278 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
279 TestNoKubernetes/serial/StartWithK8s 21.97
280 TestNoKubernetes/serial/StartWithStopK8s 22.68
288 TestNetworkPlugins/group/false 3.92
292 TestNoKubernetes/serial/Start 7.98
293 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
294 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
295 TestNoKubernetes/serial/ProfileList 1.68
296 TestNoKubernetes/serial/Stop 2.31
297 TestNoKubernetes/serial/StartNoArgs 7.23
298 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
300 TestStartStop/group/old-k8s-version/serial/FirstStart 56.52
302 TestStartStop/group/no-preload/serial/FirstStart 55.34
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.35
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
309 TestStartStop/group/old-k8s-version/serial/Stop 12.16
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
311 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.13
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
313 TestStartStop/group/no-preload/serial/Stop 12.09
314 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
315 TestStartStop/group/old-k8s-version/serial/SecondStart 49.56
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 44.92
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
319 TestStartStop/group/no-preload/serial/SecondStart 55.78
320 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
325 TestStartStop/group/newest-cni/serial/FirstStart 31.91
326 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
327 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.21
328 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
329 TestStartStop/group/old-k8s-version/serial/Pause 3.41
330 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
332 TestStartStop/group/embed-certs/serial/FirstStart 45.36
333 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
334 TestNetworkPlugins/group/auto/Start 45.67
335 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
336 TestStartStop/group/no-preload/serial/Pause 3.57
337 TestNetworkPlugins/group/kindnet/Start 46.22
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
340 TestStartStop/group/newest-cni/serial/Stop 1.34
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
342 TestStartStop/group/newest-cni/serial/SecondStart 12.06
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
346 TestStartStop/group/newest-cni/serial/Pause 2.9
347 TestNetworkPlugins/group/calico/Start 52.52
349 TestNetworkPlugins/group/auto/KubeletFlags 0.32
350 TestNetworkPlugins/group/auto/NetCatPod 11.47
351 TestNetworkPlugins/group/auto/DNS 0.16
352 TestNetworkPlugins/group/auto/Localhost 0.12
353 TestNetworkPlugins/group/auto/HairPin 0.14
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
356 TestStartStop/group/embed-certs/serial/Stop 12.61
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
358 TestNetworkPlugins/group/kindnet/NetCatPod 8.2
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
360 TestStartStop/group/embed-certs/serial/SecondStart 52.35
361 TestNetworkPlugins/group/kindnet/DNS 0.17
362 TestNetworkPlugins/group/kindnet/Localhost 0.17
363 TestNetworkPlugins/group/kindnet/HairPin 0.16
364 TestNetworkPlugins/group/custom-flannel/Start 56.04
365 TestNetworkPlugins/group/calico/ControllerPod 5.08
366 TestNetworkPlugins/group/enable-default-cni/Start 60.33
367 TestNetworkPlugins/group/calico/KubeletFlags 0.51
368 TestNetworkPlugins/group/calico/NetCatPod 11.19
369 TestNetworkPlugins/group/calico/DNS 0.15
370 TestNetworkPlugins/group/calico/Localhost 0.14
371 TestNetworkPlugins/group/calico/HairPin 0.14
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
374 TestNetworkPlugins/group/flannel/Start 50.41
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
377 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
378 TestStartStop/group/embed-certs/serial/Pause 3.67
379 TestNetworkPlugins/group/bridge/Start 65.32
380 TestNetworkPlugins/group/custom-flannel/DNS 0.18
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.24
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
390 TestNetworkPlugins/group/flannel/NetCatPod 9.18
391 TestNetworkPlugins/group/flannel/DNS 0.13
392 TestNetworkPlugins/group/flannel/Localhost 0.12
393 TestNetworkPlugins/group/flannel/HairPin 0.11
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
395 TestNetworkPlugins/group/bridge/NetCatPod 9.22
396 TestNetworkPlugins/group/bridge/DNS 0.18
397 TestNetworkPlugins/group/bridge/Localhost 0.12
398 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (17.29s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-196998 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-196998 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (17.288465115s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (17.29s)
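
The json-events variant runs minikube start with -o=json, so progress is emitted as one JSON object per line rather than plain text. A rough way to inspect that stream outside the test harness (a sketch only: it assumes jq is installed, that the profile name download-only-demo is free, and that each line carries a CloudEvents-style "type" field, which is how minikube's JSON output is normally shaped):

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
	  --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker \
	  | jq -r '.type' | sort | uniq -c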

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1121 13:56:09.240122   14523 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1121 13:56:09.240216   14523 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
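
The preload-exists subtest is only an on-disk lookup; the same check can be repeated by hand by listing the tarball the json-events subtest downloaded (path taken verbatim from the log above):

	ls -lh /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4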

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-196998
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-196998: exit status 85 (75.202602ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-196998 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-196998 │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 13:55:52
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 13:55:52.004198   14535 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:55:52.004463   14535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:55:52.004472   14535 out.go:374] Setting ErrFile to fd 2...
	I1121 13:55:52.004476   14535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:55:52.004677   14535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	W1121 13:55:52.004799   14535 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21847-11004/.minikube/config/config.json: open /home/jenkins/minikube-integration/21847-11004/.minikube/config/config.json: no such file or directory
	I1121 13:55:52.005272   14535 out.go:368] Setting JSON to true
	I1121 13:55:52.006160   14535 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2294,"bootTime":1763731058,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 13:55:52.006256   14535 start.go:143] virtualization: kvm guest
	I1121 13:55:52.008709   14535 out.go:99] [download-only-196998] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1121 13:55:52.008864   14535 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball: no such file or directory
	I1121 13:55:52.008883   14535 notify.go:221] Checking for updates...
	I1121 13:55:52.010287   14535 out.go:171] MINIKUBE_LOCATION=21847
	I1121 13:55:52.011880   14535 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 13:55:52.013572   14535 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 13:55:52.015135   14535 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 13:55:52.016478   14535 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1121 13:55:52.018855   14535 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 13:55:52.019125   14535 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 13:55:52.043815   14535 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 13:55:52.043881   14535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:55:52.449499   14535 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-21 13:55:52.439330138 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 13:55:52.449636   14535 docker.go:319] overlay module found
	I1121 13:55:52.451340   14535 out.go:99] Using the docker driver based on user configuration
	I1121 13:55:52.451378   14535 start.go:309] selected driver: docker
	I1121 13:55:52.451384   14535 start.go:930] validating driver "docker" against <nil>
	I1121 13:55:52.451482   14535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:55:52.512741   14535 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-21 13:55:52.502257352 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 13:55:52.512891   14535 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 13:55:52.513422   14535 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1121 13:55:52.513639   14535 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 13:55:52.515518   14535 out.go:171] Using Docker driver with root privileges
	I1121 13:55:52.517244   14535 cni.go:84] Creating CNI manager for ""
	I1121 13:55:52.517314   14535 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 13:55:52.517329   14535 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 13:55:52.517403   14535 start.go:353] cluster config:
	{Name:download-only-196998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-196998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:55:52.518802   14535 out.go:99] Starting "download-only-196998" primary control-plane node in "download-only-196998" cluster
	I1121 13:55:52.518837   14535 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 13:55:52.520092   14535 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1121 13:55:52.520139   14535 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 13:55:52.520255   14535 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 13:55:52.537845   14535 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:55:52.538074   14535 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1121 13:55:52.538190   14535 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:55:52.610992   14535 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1121 13:55:52.611019   14535 cache.go:65] Caching tarball of preloaded images
	I1121 13:55:52.611190   14535 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 13:55:52.613135   14535 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1121 13:55:52.613178   14535 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1121 13:55:52.714584   14535 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1121 13:55:52.714742   14535 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1121 13:56:03.527162   14535 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1121 13:56:03.527637   14535 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/download-only-196998/config.json ...
	I1121 13:56:03.527682   14535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/download-only-196998/config.json: {Name:mkcf919910974d994751842e23cbea5503c884bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:03.527891   14535 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 13:56:03.528148   14535 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-196998 host does not exist
	  To start a cluster, run: "minikube start -p download-only-196998"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
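
The subtest passes even though "minikube logs" exits with status 85: a --download-only profile never starts a host (see the stdout above), so the non-zero exit is tolerated here. The same behaviour can be reproduced by hand while the profile still exists:

	out/minikube-linux-amd64 logs -p download-only-196998; echo "exit status: $?"    # 85 here, since the control-plane host does not exist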

TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-196998
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (11.28s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-128199 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-128199 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.275549848s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.28s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1121 13:56:20.985986   14523 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1121 13:56:20.986015   14523 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-128199
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-128199: exit status 85 (79.835098ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-196998 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-196998 │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │
	│ delete  │ -p download-only-196998                                                                                                                                                               │ download-only-196998 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │
	│ start   │ -o=json --download-only -p download-only-128199 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-128199 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 13:56:09
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 13:56:09.762475   14941 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:56:09.762625   14941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:09.762638   14941 out.go:374] Setting ErrFile to fd 2...
	I1121 13:56:09.762644   14941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:09.762836   14941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 13:56:09.763301   14941 out.go:368] Setting JSON to true
	I1121 13:56:09.764206   14941 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2312,"bootTime":1763731058,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 13:56:09.764301   14941 start.go:143] virtualization: kvm guest
	I1121 13:56:09.766135   14941 out.go:99] [download-only-128199] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 13:56:09.766322   14941 notify.go:221] Checking for updates...
	I1121 13:56:09.767704   14941 out.go:171] MINIKUBE_LOCATION=21847
	I1121 13:56:09.768911   14941 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 13:56:09.770416   14941 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 13:56:09.774817   14941 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 13:56:09.776446   14941 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1121 13:56:09.779536   14941 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 13:56:09.779877   14941 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 13:56:09.803576   14941 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 13:56:09.803683   14941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:09.862493   14941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-21 13:56:09.852460641 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 13:56:09.862628   14941 docker.go:319] overlay module found
	I1121 13:56:09.864294   14941 out.go:99] Using the docker driver based on user configuration
	I1121 13:56:09.864331   14941 start.go:309] selected driver: docker
	I1121 13:56:09.864337   14941 start.go:930] validating driver "docker" against <nil>
	I1121 13:56:09.864423   14941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:09.923749   14941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-21 13:56:09.914625704 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 13:56:09.923899   14941 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 13:56:09.924383   14941 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1121 13:56:09.924536   14941 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 13:56:09.926424   14941 out.go:171] Using Docker driver with root privileges
	I1121 13:56:09.927736   14941 cni.go:84] Creating CNI manager for ""
	I1121 13:56:09.927792   14941 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 13:56:09.927804   14941 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 13:56:09.927859   14941 start.go:353] cluster config:
	{Name:download-only-128199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-128199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:56:09.929400   14941 out.go:99] Starting "download-only-128199" primary control-plane node in "download-only-128199" cluster
	I1121 13:56:09.929418   14941 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 13:56:09.930689   14941 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1121 13:56:09.930719   14941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 13:56:09.930826   14941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 13:56:09.947601   14941 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:56:09.947762   14941 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1121 13:56:09.947787   14941 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1121 13:56:09.947792   14941 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1121 13:56:09.947802   14941 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1121 13:56:10.271461   14941 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1121 13:56:10.271520   14941 cache.go:65] Caching tarball of preloaded images
	I1121 13:56:10.271755   14941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 13:56:10.273606   14941 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1121 13:56:10.273629   14941 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1121 13:56:10.369824   14941 preload.go:295] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1121 13:56:10.369872   14941 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21847-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-128199 host does not exist
	  To start a cluster, run: "minikube start -p download-only-128199"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-128199
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-954324 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-954324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-954324
--- PASS: TestDownloadOnlyKic (0.42s)

TestBinaryMirror (0.85s)

=== RUN   TestBinaryMirror
I1121 13:56:22.184670   14523 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-110890 --alsologtostderr --binary-mirror http://127.0.0.1:38521 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-110890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-110890
--- PASS: TestBinaryMirror (0.85s)
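
TestBinaryMirror points minikube at a local HTTP endpoint via --binary-mirror http://127.0.0.1:38521 instead of letting it fetch kubectl from dl.k8s.io. A sketch of how such a mirror could be stood up by hand; the directory layout is an assumption inferred from the dl.k8s.io URL in the log, and the profile name binary-mirror-demo is made up for the example:

	# serve a directory that mirrors the dl.k8s.io release path layout (assumed layout, not verified by this report)
	mkdir -p mirror/release/v1.34.1/bin/linux/amd64
	cp kubectl kubectl.sha256 mirror/release/v1.34.1/bin/linux/amd64/
	(cd mirror && python3 -m http.server 38521 &)
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --alsologtostderr \
	  --binary-mirror http://127.0.0.1:38521 --driver=docker --container-runtime=containerd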

TestOffline (54.37s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-324396 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-324396 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (51.486232581s)
helpers_test.go:175: Cleaning up "offline-containerd-324396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-324396
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-324396: (2.884960465s)
--- PASS: TestOffline (54.37s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-520558
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-520558: exit status 85 (66.324267ms)

                                                
                                                
-- stdout --
	* Profile "addons-520558" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-520558"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-520558
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-520558: exit status 85 (66.656549ms)

                                                
                                                
-- stdout --
	* Profile "addons-520558" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-520558"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (129.34s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-520558 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-520558 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.338609893s)
--- PASS: TestAddons/Setup (129.34s)
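For reference, the long --addons list in the start command above can also be managed incrementally once the cluster is up. A minimal sketch, assuming the same profile name and an addon exercised later in this report:

	out/minikube-linux-amd64 addons enable metrics-server -p addons-520558    # enable a single addon on a running cluster
	out/minikube-linux-amd64 addons disable metrics-server -p addons-520558   # turn it back off
	out/minikube-linux-amd64 addons list -p addons-520558                     # show the enabled/disabled state of all addons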

                                                
                                    
x
+
TestAddons/serial/Volcano (38.38s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 17.784807ms
addons_test.go:876: volcano-admission stabilized in 17.841378ms
addons_test.go:868: volcano-scheduler stabilized in 17.87222ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-4kc67" [902763a5-4924-45c5-92d3-4f70fc2357b2] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004184162s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-hrgxg" [f976a735-3ceb-4dc4-ad83-fe98cb7b01b2] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003779579s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-7tq6j" [5dff21ad-83f2-45c8-8628-051c7aa3138e] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004234564s
addons_test.go:903: (dbg) Run:  kubectl --context addons-520558 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-520558 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-520558 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [e922af0e-87c6-4e2b-9592-e62b49f175d7] Pending
helpers_test.go:352: "test-job-nginx-0" [e922af0e-87c6-4e2b-9592-e62b49f175d7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [e922af0e-87c6-4e2b-9592-e62b49f175d7] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.004036262s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-520558 addons disable volcano --alsologtostderr -v=1: (12.001937928s)
--- PASS: TestAddons/serial/Volcano (38.38s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-520558 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-520558 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.48s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-520558 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-520558 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [92d34093-607f-4d4d-b0fc-dc20b6f57fd4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [92d34093-607f-4d4d-b0fc-dc20b6f57fd4] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003394069s
addons_test.go:694: (dbg) Run:  kubectl --context addons-520558 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-520558 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-520558 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.48s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 15.736886ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-hcwm8" [d3a7ba5e-69a2-4079-927d-4f17e0aa6b85] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00344672s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-npvss" [9aea9204-cf77-4e0a-ae00-5bad86147699] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004360778s
addons_test.go:392: (dbg) Run:  kubectl --context addons-520558 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-520558 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-520558 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.700529819s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 ip
2025/11/21 13:59:44 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.53s)
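The in-cluster check above (wget against registry.kube-system.svc.cluster.local) can be approximated from the host as well. A hedged sketch, assuming the registry addon is still enabled and reachable on port 5000 at the cluster IP, as the DEBUG GET line suggests:

	IP=$(out/minikube-linux-amd64 -p addons-520558 ip)    # cluster IP, 192.168.49.2 in this run
	curl -s "http://${IP}:5000/v2/_catalog"               # standard registry v2 API call: list repositories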

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.73s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.03424ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-520558
addons_test.go:332: (dbg) Run:  kubectl --context addons-520558 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.73s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (20.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-520558 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-520558 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-520558 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [6e0c049e-98df-4886-979c-2620290e9194] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [6e0c049e-98df-4886-979c-2620290e9194] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003346421s
I1121 14:00:07.437395   14523 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-520558 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-520558 addons disable ingress-dns --alsologtostderr -v=1: (1.090500329s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-520558 addons disable ingress --alsologtostderr -v=1: (7.819618309s)
--- PASS: TestAddons/parallel/Ingress (20.30s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.66s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-4g7pc" [aec82862-e4a5-4c66-bead-2df588bb7a8c] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003743493s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-520558 addons disable inspektor-gadget --alsologtostderr -v=1: (5.65973319s)
--- PASS: TestAddons/parallel/InspektorGadget (10.66s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.63s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.48805ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-b4fff" [9975c905-165e-4ea4-ad84-684012bbbe6d] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003817034s
addons_test.go:463: (dbg) Run:  kubectl --context addons-520558 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.63s)

                                                
                                    
x
+
TestAddons/parallel/CSI (46.88s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.271613ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-520558 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-520558 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b837d938-f5aa-4875-975f-768378949f48] Pending
helpers_test.go:352: "task-pv-pod" [b837d938-f5aa-4875-975f-768378949f48] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b837d938-f5aa-4875-975f-768378949f48] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003427127s
addons_test.go:572: (dbg) Run:  kubectl --context addons-520558 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-520558 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-520558 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-520558 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-520558 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-520558 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-520558 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b69f1e91-1f59-44d2-abc4-cdae9457ffd3] Pending
helpers_test.go:352: "task-pv-pod-restore" [b69f1e91-1f59-44d2-abc4-cdae9457ffd3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b69f1e91-1f59-44d2-abc4-cdae9457ffd3] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003246993s
addons_test.go:614: (dbg) Run:  kubectl --context addons-520558 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-520558 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-520558 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-520558 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.744725528s)
--- PASS: TestAddons/parallel/CSI (46.88s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (21.41s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-520558 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-zv59t" [f84a58c7-063f-402b-87ec-fdeba50637e2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-zv59t" [f84a58c7-063f-402b-87ec-fdeba50637e2] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003171097s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-520558 addons disable headlamp --alsologtostderr -v=1: (5.645131529s)
--- PASS: TestAddons/parallel/Headlamp (21.41s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-f6sd2" [6a457990-8419-4343-b015-fe644f93f84a] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004139015s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (54.76s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-520558 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-520558 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-520558 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [07898ce3-b171-4e58-abaf-e0293bf28849] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [07898ce3-b171-4e58-abaf-e0293bf28849] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [07898ce3-b171-4e58-abaf-e0293bf28849] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003321889s
addons_test.go:967: (dbg) Run:  kubectl --context addons-520558 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 ssh "cat /opt/local-path-provisioner/pvc-d7a8c595-96bb-4020-8fa9-aeeb5c8d36a9_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-520558 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-520558 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-520558 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.865187774s)
--- PASS: TestAddons/parallel/LocalPath (54.76s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-7d9qw" [d9f71a44-9269-4455-8733-cfe5e0f557e5] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004164073s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-kr9mn" [90ccfe26-7a15-437a-ab12-3a2b0de14a20] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003722165s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-520558 addons disable yakd --alsologtostderr -v=1: (5.697887512s)
--- PASS: TestAddons/parallel/Yakd (10.70s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
I1121 13:59:29.335336   14523 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-sq644" [1c544edd-2623-4da4-882c-7d96c013668f] Running
I1121 13:59:29.338521   14523 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1121 13:59:29.338567   14523 kapi.go:107] duration metric: took 3.254187ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004051055s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-520558 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.53s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.33s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-520558
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-520558: (12.030626273s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-520558
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-520558
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-520558
--- PASS: TestAddons/StoppedEnableDisable (12.33s)

                                                
                                    
x
+
TestCertOptions (26.57s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-733993 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-733993 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (21.521325413s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-733993 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-733993 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-733993 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-733993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-733993
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-733993: (4.179464131s)
--- PASS: TestCertOptions (26.57s)

                                                
                                    
x
+
TestCertExpiration (226.01s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-371956 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-371956 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (35.5007348s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-371956 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-371956 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.960207164s)
helpers_test.go:175: Cleaning up "cert-expiration-371956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-371956
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-371956: (4.547525867s)
--- PASS: TestCertExpiration (226.01s)
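A hedged way to observe the effect of --cert-expiration on the node, reusing the openssl invocation style and certificate path from TestCertOptions below; the expectation in the comment is an assumption based on the flag value:

	out/minikube-linux-amd64 start -p cert-expiration-371956 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 -p cert-expiration-371956 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"   # notAfter should be roughly 3 minutes out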

                                                
                                    
x
+
TestForceSystemdFlag (28.04s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-730471 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-730471 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.150477978s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-730471 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-730471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-730471
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-730471: (2.58721791s)
--- PASS: TestForceSystemdFlag (28.04s)

                                                
                                    
x
+
TestForceSystemdEnv (37.66s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-359032 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-359032 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.693238563s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-359032 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-359032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-359032
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-359032: (2.592802143s)
--- PASS: TestForceSystemdEnv (37.66s)

                                                
                                    
x
+
TestDockerEnvContainerd (37.33s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-344708 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-344708 --driver=docker  --container-runtime=containerd: (21.519839373s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-344708"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-344708": (1.001925317s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXg5n3Pb/agent.38062" SSH_AGENT_PID="38063" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXg5n3Pb/agent.38062" SSH_AGENT_PID="38063" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXg5n3Pb/agent.38062" SSH_AGENT_PID="38063" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.89653707s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXg5n3Pb/agent.38062" SSH_AGENT_PID="38063" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-344708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-344708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-344708: (1.959101862s)
--- PASS: TestDockerEnvContainerd (37.33s)
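The flow exercised here, condensed into a hedged sketch. The profile name, image tag, and build context are taken from the run above; the SSH agent socket and port shown in the log are assigned dynamically, so eval-ing the docker-env output is the usual way to pick them up:

	out/minikube-linux-amd64 start -p dockerenv-344708 --driver=docker --container-runtime=containerd
	eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-344708)"   # exports DOCKER_HOST=ssh://... and loads the node key into ssh-agent
	docker version                                                                           # now talks to the Docker endpoint inside the minikube node over SSH
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls                                                                          # the freshly built image is visible on the node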

                                                
                                    
x
+
TestErrorSpam/setup (21.24s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-363931 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-363931 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-363931 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-363931 --driver=docker  --container-runtime=containerd: (21.23494731s)
--- PASS: TestErrorSpam/setup (21.24s)

                                                
                                    
x
+
TestErrorSpam/start (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

                                                
                                    
x
+
TestErrorSpam/status (0.98s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 status
--- PASS: TestErrorSpam/status (0.98s)

                                                
                                    
x
+
TestErrorSpam/pause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 pause
--- PASS: TestErrorSpam/pause (1.48s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

                                                
                                    
x
+
TestErrorSpam/stop (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 stop: (1.299626573s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363931 --log_dir /tmp/nospam-363931 stop
--- PASS: TestErrorSpam/stop (1.51s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21847-11004/.minikube/files/etc/test/nested/copy/14523/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (37.37s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-565315 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-565315 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (37.373086421s)
--- PASS: TestFunctional/serial/StartWithProxy (37.37s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.47s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1121 14:02:28.959889   14523 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-565315 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-565315 --alsologtostderr -v=8: (6.464943927s)
functional_test.go:678: soft start took 6.46588885s for "functional-565315" cluster.
I1121 14:02:35.425575   14523 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.47s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-565315 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-565315 cache add registry.k8s.io/pause:3.1: (1.093350572s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-565315 cache add registry.k8s.io/pause:3.3: (1.093235188s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-565315 /tmp/TestFunctionalserialCacheCmdcacheadd_local1895304557/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 cache add minikube-local-cache-test:functional-565315
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-565315 cache add minikube-local-cache-test:functional-565315: (1.584212097s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 cache delete minikube-local-cache-test:functional-565315
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-565315
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-565315 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.983349ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
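The cache workflow these subtests cover, condensed into a sketch; every command below appears verbatim in the log above:

	out/minikube-linux-amd64 -p functional-565315 cache add registry.k8s.io/pause:latest                  # download and store the image in the local cache
	out/minikube-linux-amd64 cache list                                                                   # list cached images
	out/minikube-linux-amd64 -p functional-565315 ssh sudo crictl rmi registry.k8s.io/pause:latest        # remove it from the node's runtime
	out/minikube-linux-amd64 -p functional-565315 cache reload                                            # push cached images back onto the node
	out/minikube-linux-amd64 -p functional-565315 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # confirm it is present again
	out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest                                    # drop it from the cache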

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 kubectl -- --context functional-565315 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-565315 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (38.55s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-565315 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-565315 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.5469823s)
functional_test.go:776: restart took 38.547102342s for "functional-565315" cluster.
I1121 14:03:21.607525   14523 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (38.55s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-565315 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.28s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-565315 logs: (1.274974369s)
--- PASS: TestFunctional/serial/LogsCmd (1.28s)

TestFunctional/serial/LogsFileCmd (1.31s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 logs --file /tmp/TestFunctionalserialLogsFileCmd3890402349/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-565315 logs --file /tmp/TestFunctionalserialLogsFileCmd3890402349/001/logs.txt: (1.308480356s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

TestFunctional/serial/InvalidService (4.15s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-565315 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-565315
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-565315: exit status 115 (349.02987ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32449 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-565315 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.15s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-565315 config get cpus: exit status 14 (98.285668ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-565315 config get cpus: exit status 14 (99.186371ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (9.84s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-565315 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-565315 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 58733: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.84s)

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-565315 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-565315 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (179.677943ms)

-- stdout --
	* [functional-565315] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1121 14:03:43.467291   57613 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:03:43.467566   57613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:03:43.467577   57613 out.go:374] Setting ErrFile to fd 2...
	I1121 14:03:43.467582   57613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:03:43.467796   57613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:03:43.468233   57613 out.go:368] Setting JSON to false
	I1121 14:03:43.469249   57613 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2765,"bootTime":1763731058,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:03:43.469348   57613 start.go:143] virtualization: kvm guest
	I1121 14:03:43.471561   57613 out.go:179] * [functional-565315] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:03:43.472992   57613 notify.go:221] Checking for updates...
	I1121 14:03:43.472998   57613 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:03:43.474433   57613 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:03:43.475820   57613 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:03:43.477509   57613 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 14:03:43.479266   57613 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:03:43.480816   57613 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:03:43.484996   57613 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:03:43.485532   57613 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:03:43.511180   57613 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:03:43.511306   57613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:03:43.572371   57613 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-21 14:03:43.561483581 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:03:43.572510   57613 docker.go:319] overlay module found
	I1121 14:03:43.574693   57613 out.go:179] * Using the docker driver based on existing profile
	I1121 14:03:43.576182   57613 start.go:309] selected driver: docker
	I1121 14:03:43.576201   57613 start.go:930] validating driver "docker" against &{Name:functional-565315 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-565315 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:03:43.576292   57613 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:03:43.578509   57613 out.go:203] 
	W1121 14:03:43.580123   57613 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1121 14:03:43.581526   57613 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-565315 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.45s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-565315 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-565315 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (198.460006ms)

-- stdout --
	* [functional-565315] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1121 14:03:43.923351   57976 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:03:43.923482   57976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:03:43.923493   57976 out.go:374] Setting ErrFile to fd 2...
	I1121 14:03:43.923499   57976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:03:43.923927   57976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:03:43.924381   57976 out.go:368] Setting JSON to false
	I1121 14:03:43.925594   57976 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2766,"bootTime":1763731058,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:03:43.925708   57976 start.go:143] virtualization: kvm guest
	I1121 14:03:43.927912   57976 out.go:179] * [functional-565315] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1121 14:03:43.929755   57976 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:03:43.929762   57976 notify.go:221] Checking for updates...
	I1121 14:03:43.932536   57976 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:03:43.934814   57976 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:03:43.936566   57976 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 14:03:43.938226   57976 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:03:43.939726   57976 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:03:43.941655   57976 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:03:43.942357   57976 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:03:43.969432   57976 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:03:43.969523   57976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:03:44.040793   57976 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-21 14:03:44.028659237 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:03:44.040930   57976 docker.go:319] overlay module found
	I1121 14:03:44.042799   57976 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1121 14:03:44.044254   57976 start.go:309] selected driver: docker
	I1121 14:03:44.044278   57976 start.go:930] validating driver "docker" against &{Name:functional-565315 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-565315 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:03:44.044396   57976 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:03:44.046655   57976 out.go:203] 
	W1121 14:03:44.047989   57976 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1121 14:03:44.049356   57976 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

TestFunctional/parallel/ServiceCmdConnect (10.51s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-565315 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-565315 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-6xqlg" [4e2dc549-0600-43cf-a875-d41ff285207d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-6xqlg" [4e2dc549-0600-43cf-a875-d41ff285207d] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003470635s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31367
functional_test.go:1680: http://192.168.49.2:31367: success! body:
Request served by hello-node-connect-7d85dfc575-6xqlg

HTTP/1.1 GET /

Host: 192.168.49.2:31367
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.51s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (32.33s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [321b8ba7-a693-4761-9b73-dee56c6934bc] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004206945s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-565315 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-565315 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-565315 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-565315 apply -f testdata/storage-provisioner/pod.yaml
I1121 14:03:34.471302   14523 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [20129764-683b-4944-8a14-afc89752c897] Pending
helpers_test.go:352: "sp-pod" [20129764-683b-4944-8a14-afc89752c897] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [20129764-683b-4944-8a14-afc89752c897] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004599653s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-565315 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-565315 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-565315 delete -f testdata/storage-provisioner/pod.yaml: (1.580302795s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-565315 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [423528b3-4dc4-4051-8b02-92c9fbc8eb5f] Pending
helpers_test.go:352: "sp-pod" [423528b3-4dc4-4051-8b02-92c9fbc8eb5f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [423528b3-4dc4-4051-8b02-92c9fbc8eb5f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003355516s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-565315 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.33s)

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (1.86s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh -n functional-565315 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 cp functional-565315:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd679312595/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh -n functional-565315 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh -n functional-565315 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.86s)

TestFunctional/parallel/MySQL (28.76s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-565315 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-tdl4r" [33189101-1cd8-4be9-9695-55bd342e441e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-tdl4r" [33189101-1cd8-4be9-9695-55bd342e441e] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.004052468s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-565315 exec mysql-5bb876957f-tdl4r -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-565315 exec mysql-5bb876957f-tdl4r -- mysql -ppassword -e "show databases;": exit status 1 (119.726698ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1121 14:04:11.783909   14523 retry.go:31] will retry after 1.218911135s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-565315 exec mysql-5bb876957f-tdl4r -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-565315 exec mysql-5bb876957f-tdl4r -- mysql -ppassword -e "show databases;": exit status 1 (111.90323ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1121 14:04:13.115077   14523 retry.go:31] will retry after 1.11524405s: exit status 1
E1121 14:04:13.419676   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-565315 exec mysql-5bb876957f-tdl4r -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-565315 exec mysql-5bb876957f-tdl4r -- mysql -ppassword -e "show databases;": exit status 1 (115.102662ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1121 14:04:14.346102   14523 retry.go:31] will retry after 2.79170306s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-565315 exec mysql-5bb876957f-tdl4r -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.76s)

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14523/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "sudo cat /etc/test/nested/copy/14523/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.85s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14523.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "sudo cat /etc/ssl/certs/14523.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14523.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "sudo cat /usr/share/ca-certificates/14523.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/145232.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "sudo cat /etc/ssl/certs/145232.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/145232.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "sudo cat /usr/share/ca-certificates/145232.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.85s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-565315 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-565315 ssh "sudo systemctl is-active docker": exit status 1 (315.395423ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-565315 ssh "sudo systemctl is-active crio": exit status 1 (292.781869ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

TestFunctional/parallel/License (0.39s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.39s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-565315 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-565315 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-26q2q" [8477cb21-6900-407a-a691-a7c1f499bd7c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-26q2q" [8477cb21-6900-407a-a691-a7c1f499bd7c] Running
E1121 14:03:32.442511   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:03:32.448993   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:03:32.460525   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:03:32.481992   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:03:32.523397   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:03:32.605637   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:03:32.767741   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:03:33.089534   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:03:33.731476   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003474536s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-565315 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-565315 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-565315 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-565315 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 53788: os: process already finished
helpers_test.go:525: unable to kill pid 53471: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-565315 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-565315 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [e3e155d4-1edd-4fdc-9913-e30d7dc8c62c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [e3e155d4-1edd-4fdc-9913-e30d7dc8c62c] Running
E1121 14:03:35.013218   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003550041s
I1121 14:03:40.677282   14523 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 service list -o json
E1121 14:03:37.574535   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1504: Took "528.928069ms" to run "out/minikube-linux-amd64 -p functional-565315 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31160
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31160
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.53s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-565315 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-565315
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-565315
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-565315 image ls --format short --alsologtostderr:
I1121 14:03:55.013562   61949 out.go:360] Setting OutFile to fd 1 ...
I1121 14:03:55.013822   61949 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:03:55.013831   61949 out.go:374] Setting ErrFile to fd 2...
I1121 14:03:55.013835   61949 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:03:55.014057   61949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
I1121 14:03:55.014706   61949 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:03:55.014807   61949 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:03:55.015165   61949 cli_runner.go:164] Run: docker container inspect functional-565315 --format={{.State.Status}}
I1121 14:03:55.034823   61949 ssh_runner.go:195] Run: systemctl --version
I1121 14:03:55.034867   61949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-565315
I1121 14:03:55.055512   61949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/functional-565315/id_rsa Username:docker}
I1121 14:03:55.151568   61949 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-565315 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                     │ latest             │ sha256:60adc2 │ 59.8MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/kicbase/echo-server               │ functional-565315  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-565315  │ sha256:edf661 │ 991B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:d4918c │ 22.6MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-565315 image ls --format table --alsologtostderr:
I1121 14:03:55.535314   62266 out.go:360] Setting OutFile to fd 1 ...
I1121 14:03:55.535433   62266 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:03:55.535439   62266 out.go:374] Setting ErrFile to fd 2...
I1121 14:03:55.535445   62266 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:03:55.535783   62266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
I1121 14:03:55.536381   62266 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:03:55.536515   62266 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:03:55.536914   62266 cli_runner.go:164] Run: docker container inspect functional-565315 --format={{.State.Status}}
I1121 14:03:55.557041   62266 ssh_runner.go:195] Run: systemctl --version
I1121 14:03:55.557093   62266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-565315
I1121 14:03:55.577075   62266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/functional-565315/id_rsa Username:docker}
I1121 14:03:55.672652   62266 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-565315 image ls --format json --alsologtostderr:
[{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-565315"],"size":"2372971"},{"id":"sha256:edf661296a499e09444e24402e2c01d9add484687fe62e258e5e0c634596d74f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-56531
5"],"size":"991"},{"id":"sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"59772801"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registr
y.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9f
a4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22631814"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a30
2a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-565315 image ls --format json --alsologtostderr:
I1121 14:03:55.297281   62117 out.go:360] Setting OutFile to fd 1 ...
I1121 14:03:55.297633   62117 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:03:55.297647   62117 out.go:374] Setting ErrFile to fd 2...
I1121 14:03:55.297653   62117 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:03:55.297932   62117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
I1121 14:03:55.299667   62117 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:03:55.299848   62117 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:03:55.300400   62117 cli_runner.go:164] Run: docker container inspect functional-565315 --format={{.State.Status}}
I1121 14:03:55.325819   62117 ssh_runner.go:195] Run: systemctl --version
I1121 14:03:55.325868   62117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-565315
I1121 14:03:55.347066   62117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/functional-565315/id_rsa Username:docker}
I1121 14:03:55.442341   62117 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-565315 image ls --format yaml --alsologtostderr:
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-565315
size: "2372971"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "22631814"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:edf661296a499e09444e24402e2c01d9add484687fe62e258e5e0c634596d74f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-565315
size: "991"
- id: sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "59772801"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-565315 image ls --format yaml --alsologtostderr:
I1121 14:03:55.057216   61997 out.go:360] Setting OutFile to fd 1 ...
I1121 14:03:55.057609   61997 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:03:55.057623   61997 out.go:374] Setting ErrFile to fd 2...
I1121 14:03:55.057631   61997 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:03:55.058161   61997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
I1121 14:03:55.059226   61997 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:03:55.059326   61997 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:03:55.059737   61997 cli_runner.go:164] Run: docker container inspect functional-565315 --format={{.State.Status}}
I1121 14:03:55.078918   61997 ssh_runner.go:195] Run: systemctl --version
I1121 14:03:55.078967   61997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-565315
I1121 14:03:55.098008   61997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/functional-565315/id_rsa Username:docker}
I1121 14:03:55.195718   61997 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.97s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-565315 ssh pgrep buildkitd: exit status 1 (300.592534ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image build -t localhost/my-image:functional-565315 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-565315 image build -t localhost/my-image:functional-565315 testdata/build --alsologtostderr: (4.41174811s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-565315 image build -t localhost/my-image:functional-565315 testdata/build --alsologtostderr:
I1121 14:03:55.543685   62273 out.go:360] Setting OutFile to fd 1 ...
I1121 14:03:55.543989   62273 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:03:55.544004   62273 out.go:374] Setting ErrFile to fd 2...
I1121 14:03:55.544007   62273 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:03:55.544212   62273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
I1121 14:03:55.544773   62273 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:03:55.545376   62273 config.go:182] Loaded profile config "functional-565315": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:03:55.545789   62273 cli_runner.go:164] Run: docker container inspect functional-565315 --format={{.State.Status}}
I1121 14:03:55.566661   62273 ssh_runner.go:195] Run: systemctl --version
I1121 14:03:55.566732   62273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-565315
I1121 14:03:55.585742   62273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/functional-565315/id_rsa Username:docker}
I1121 14:03:55.680810   62273 build_images.go:162] Building image from path: /tmp/build.3143856531.tar
I1121 14:03:55.680899   62273 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1121 14:03:55.691320   62273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3143856531.tar
I1121 14:03:55.695886   62273 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3143856531.tar: stat -c "%s %y" /var/lib/minikube/build/build.3143856531.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3143856531.tar': No such file or directory
I1121 14:03:55.695930   62273 ssh_runner.go:362] scp /tmp/build.3143856531.tar --> /var/lib/minikube/build/build.3143856531.tar (3072 bytes)
I1121 14:03:55.717209   62273 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3143856531
I1121 14:03:55.725476   62273 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3143856531 -xf /var/lib/minikube/build/build.3143856531.tar
I1121 14:03:55.733729   62273 containerd.go:394] Building image: /var/lib/minikube/build/build.3143856531
I1121 14:03:55.733807   62273 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3143856531 --local dockerfile=/var/lib/minikube/build/build.3143856531 --output type=image,name=localhost/my-image:functional-565315
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 1.0s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:013d3ab061faff759c833491d2301dc7d5f05178f1aae155ea3897ac5d31755c done
#8 exporting config sha256:817bf7ea1621537eb6c9eb0a2392a1d046b0e9c3b0c69902b6ade830e96cdba5 done
#8 naming to localhost/my-image:functional-565315 done
#8 DONE 0.1s
I1121 14:03:59.868804   62273 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3143856531 --local dockerfile=/var/lib/minikube/build/build.3143856531 --output type=image,name=localhost/my-image:functional-565315: (4.134949137s)
I1121 14:03:59.868889   62273 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3143856531
I1121 14:03:59.879658   62273 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3143856531.tar
I1121 14:03:59.888243   62273 build_images.go:218] Built localhost/my-image:functional-565315 from /tmp/build.3143856531.tar
I1121 14:03:59.888272   62273 build_images.go:134] succeeded building to: functional-565315
I1121 14:03:59.888276   62273 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.97s)

TestFunctional/parallel/ImageCommands/Setup (1.93s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.909609142s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-565315
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-565315 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.105.22 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-565315 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image load --daemon kicbase/echo-server:functional-565315 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "443.858171ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "70.288702ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/MountCmd/any-port (8.17s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-565315 /tmp/TestFunctionalparallelMountCmdany-port3224219022/001:/mount-9p --alsologtostderr -v=1]
E1121 14:03:42.696218   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:107: wrote "test-1763733822696646389" to /tmp/TestFunctionalparallelMountCmdany-port3224219022/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763733822696646389" to /tmp/TestFunctionalparallelMountCmdany-port3224219022/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763733822696646389" to /tmp/TestFunctionalparallelMountCmdany-port3224219022/001/test-1763733822696646389
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-565315 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.150443ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1121 14:03:43.040151   14523 retry.go:31] will retry after 634.533749ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 21 14:03 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 21 14:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 21 14:03 test-1763733822696646389
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh cat /mount-9p/test-1763733822696646389
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-565315 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [446892b0-8a2e-4c02-b99b-20506e76ccc9] Pending
helpers_test.go:352: "busybox-mount" [446892b0-8a2e-4c02-b99b-20506e76ccc9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [446892b0-8a2e-4c02-b99b-20506e76ccc9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [446892b0-8a2e-4c02-b99b-20506e76ccc9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004225443s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-565315 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-565315 /tmp/TestFunctionalparallelMountCmdany-port3224219022/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.17s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "371.027853ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "64.04528ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image load --daemon kicbase/echo-server:functional-565315 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-565315
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image load --daemon kicbase/echo-server:functional-565315 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-565315 image load --daemon kicbase/echo-server:functional-565315 --alsologtostderr: (1.365770355s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.42s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image save kicbase/echo-server:functional-565315 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image rm kicbase/echo-server:functional-565315 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image ls
I1121 14:03:47.301152   14523 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-565315
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 image save --daemon kicbase/echo-server:functional-565315 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-565315
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

TestFunctional/parallel/MountCmd/specific-port (1.86s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-565315 /tmp/TestFunctionalparallelMountCmdspecific-port3246448839/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-565315 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (330.45371ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1121 14:03:51.192251   14523 retry.go:31] will retry after 433.571105ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-565315 /tmp/TestFunctionalparallelMountCmdspecific-port3246448839/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-565315 ssh "sudo umount -f /mount-9p": exit status 1 (268.392398ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-565315 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-565315 /tmp/TestFunctionalparallelMountCmdspecific-port3246448839/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.89s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-565315 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1568729198/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-565315 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1568729198/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-565315 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1568729198/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "findmnt -T" /mount1
E1121 14:03:52.937455   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-565315 ssh "findmnt -T" /mount1: exit status 1 (345.744086ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1121 14:03:53.064220   14523 retry.go:31] will retry after 559.52423ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "findmnt -T" /mount2
2025/11/21 14:03:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-565315 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-565315 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-565315 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1568729198/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-565315 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1568729198/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-565315 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1568729198/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.89s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-565315
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-565315
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-565315
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (114.74s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1121 14:04:54.381245   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-339216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m54.011313893s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (114.74s)
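
For reference, the HA start exercised above can be reproduced outside the harness; the profile name ha-demo and a released minikube binary on PATH are assumptions, not part of this run:

    # Start a multi-control-plane cluster on the docker driver with containerd,
    # then confirm every node reports Running/Configured.
    minikube start -p ha-demo --ha --memory 3072 --wait true \
      --driver=docker --container-runtime=containerd
    minikube -p ha-demo status
    kubectl --context ha-demo get nodes -o wide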

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- rollout status deployment/busybox
E1121 14:06:16.302647   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-339216 kubectl -- rollout status deployment/busybox: (3.334679601s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-7rlz7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-d9gtz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-rjtkq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-7rlz7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-d9gtz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-rjtkq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-7rlz7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-d9gtz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-rjtkq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.51s)
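
The DNS checks above run against the busybox deployment from testdata/ha/ha-pod-dns-test.yaml; a rough imperative stand-in (not the actual manifest — image tag and replica count are assumptions) looks like:

    # Hypothetical equivalent of the test's busybox deployment.
    kubectl --context ha-demo create deployment busybox --image=busybox:1.36 \
      --replicas=3 -- sleep 3600
    kubectl --context ha-demo rollout status deployment/busybox
    # Repeat the in-cluster DNS lookups the test performs in each pod.
    for pod in $(kubectl --context ha-demo get pods -l app=busybox \
        -o jsonpath='{.items[*].metadata.name}'); do
      kubectl --context ha-demo exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done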

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-7rlz7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-7rlz7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-d9gtz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-d9gtz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-rjtkq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 kubectl -- exec busybox-7b57f96db7-rjtkq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)
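
The host-reachability check above resolves host.minikube.internal inside a pod and pings the address it maps to (the docker bridge gateway, 192.168.49.1 in this run); a minimal sketch, with pod selection assumed:

    # Resolve the host alias minikube injects, then ping it from inside the pod.
    POD=$(kubectl --context ha-demo get pods -l app=busybox -o jsonpath='{.items[0].metadata.name}')
    HOST_IP=$(kubectl --context ha-demo exec "$POD" -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-demo exec "$POD" -- ping -c 1 "$HOST_IP"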

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-339216 node add --alsologtostderr -v 5: (23.340664533s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.24s)
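
node add without --control-plane attaches a worker to the existing profile; a minimal sketch of the same flow (profile name assumed):

    # Add a worker node to the running HA profile and confirm it joins.
    minikube -p ha-demo node add
    minikube -p ha-demo status
    kubectl --context ha-demo get nodes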

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-339216 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp testdata/cp-test.txt ha-339216:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3752681461/001/cp-test_ha-339216.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216:/home/docker/cp-test.txt ha-339216-m02:/home/docker/cp-test_ha-339216_ha-339216-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m02 "sudo cat /home/docker/cp-test_ha-339216_ha-339216-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216:/home/docker/cp-test.txt ha-339216-m03:/home/docker/cp-test_ha-339216_ha-339216-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m03 "sudo cat /home/docker/cp-test_ha-339216_ha-339216-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216:/home/docker/cp-test.txt ha-339216-m04:/home/docker/cp-test_ha-339216_ha-339216-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m04 "sudo cat /home/docker/cp-test_ha-339216_ha-339216-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp testdata/cp-test.txt ha-339216-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3752681461/001/cp-test_ha-339216-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216-m02:/home/docker/cp-test.txt ha-339216:/home/docker/cp-test_ha-339216-m02_ha-339216.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216 "sudo cat /home/docker/cp-test_ha-339216-m02_ha-339216.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216-m02:/home/docker/cp-test.txt ha-339216-m03:/home/docker/cp-test_ha-339216-m02_ha-339216-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m03 "sudo cat /home/docker/cp-test_ha-339216-m02_ha-339216-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216-m02:/home/docker/cp-test.txt ha-339216-m04:/home/docker/cp-test_ha-339216-m02_ha-339216-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m04 "sudo cat /home/docker/cp-test_ha-339216-m02_ha-339216-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp testdata/cp-test.txt ha-339216-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3752681461/001/cp-test_ha-339216-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216-m03:/home/docker/cp-test.txt ha-339216:/home/docker/cp-test_ha-339216-m03_ha-339216.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216 "sudo cat /home/docker/cp-test_ha-339216-m03_ha-339216.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216-m03:/home/docker/cp-test.txt ha-339216-m02:/home/docker/cp-test_ha-339216-m03_ha-339216-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m02 "sudo cat /home/docker/cp-test_ha-339216-m03_ha-339216-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216-m03:/home/docker/cp-test.txt ha-339216-m04:/home/docker/cp-test_ha-339216-m03_ha-339216-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m04 "sudo cat /home/docker/cp-test_ha-339216-m03_ha-339216-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp testdata/cp-test.txt ha-339216-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3752681461/001/cp-test_ha-339216-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216-m04:/home/docker/cp-test.txt ha-339216:/home/docker/cp-test_ha-339216-m04_ha-339216.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216 "sudo cat /home/docker/cp-test_ha-339216-m04_ha-339216.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216-m04:/home/docker/cp-test.txt ha-339216-m02:/home/docker/cp-test_ha-339216-m04_ha-339216-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m02 "sudo cat /home/docker/cp-test_ha-339216-m04_ha-339216-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 cp ha-339216-m04:/home/docker/cp-test.txt ha-339216-m03:/home/docker/cp-test_ha-339216-m04_ha-339216-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 ssh -n ha-339216-m03 "sudo cat /home/docker/cp-test_ha-339216-m04_ha-339216-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.44s)
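
Each cp/ssh pair above copies a file onto a node and reads it back over SSH; a condensed sketch (file path and node name are assumptions):

    # Copy a local file to a secondary node, then verify its contents in place.
    echo "hello from the host" > /tmp/cp-test.txt
    minikube -p ha-demo cp /tmp/cp-test.txt ha-demo-m02:/home/docker/cp-test.txt
    minikube -p ha-demo ssh -n ha-demo-m02 "sudo cat /home/docker/cp-test.txt"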

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-339216 node stop m02 --alsologtostderr -v 5: (12.038142544s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-339216 status --alsologtostderr -v 5: exit status 7 (714.187671ms)

                                                
                                                
-- stdout --
	ha-339216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-339216-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-339216-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-339216-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 14:07:16.853344   83679 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:07:16.853629   83679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:07:16.853639   83679 out.go:374] Setting ErrFile to fd 2...
	I1121 14:07:16.853643   83679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:07:16.853862   83679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:07:16.854028   83679 out.go:368] Setting JSON to false
	I1121 14:07:16.854057   83679 mustload.go:66] Loading cluster: ha-339216
	I1121 14:07:16.854189   83679 notify.go:221] Checking for updates...
	I1121 14:07:16.854425   83679 config.go:182] Loaded profile config "ha-339216": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:07:16.854437   83679 status.go:174] checking status of ha-339216 ...
	I1121 14:07:16.855014   83679 cli_runner.go:164] Run: docker container inspect ha-339216 --format={{.State.Status}}
	I1121 14:07:16.877831   83679 status.go:371] ha-339216 host status = "Running" (err=<nil>)
	I1121 14:07:16.877871   83679 host.go:66] Checking if "ha-339216" exists ...
	I1121 14:07:16.878282   83679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-339216
	I1121 14:07:16.898293   83679 host.go:66] Checking if "ha-339216" exists ...
	I1121 14:07:16.898729   83679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:07:16.898787   83679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-339216
	I1121 14:07:16.918485   83679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/ha-339216/id_rsa Username:docker}
	I1121 14:07:17.013428   83679 ssh_runner.go:195] Run: systemctl --version
	I1121 14:07:17.020114   83679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:07:17.033258   83679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:07:17.093825   83679 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:07:17.083850142 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:07:17.094447   83679 kubeconfig.go:125] found "ha-339216" server: "https://192.168.49.254:8443"
	I1121 14:07:17.094494   83679 api_server.go:166] Checking apiserver status ...
	I1121 14:07:17.094535   83679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:07:17.108137   83679 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1345/cgroup
	W1121 14:07:17.117230   83679 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1345/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:07:17.117285   83679 ssh_runner.go:195] Run: ls
	I1121 14:07:17.121312   83679 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1121 14:07:17.125529   83679 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1121 14:07:17.125563   83679 status.go:463] ha-339216 apiserver status = Running (err=<nil>)
	I1121 14:07:17.125576   83679 status.go:176] ha-339216 status: &{Name:ha-339216 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:07:17.125595   83679 status.go:174] checking status of ha-339216-m02 ...
	I1121 14:07:17.125824   83679 cli_runner.go:164] Run: docker container inspect ha-339216-m02 --format={{.State.Status}}
	I1121 14:07:17.145529   83679 status.go:371] ha-339216-m02 host status = "Stopped" (err=<nil>)
	I1121 14:07:17.145570   83679 status.go:384] host is not running, skipping remaining checks
	I1121 14:07:17.145579   83679 status.go:176] ha-339216-m02 status: &{Name:ha-339216-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:07:17.145603   83679 status.go:174] checking status of ha-339216-m03 ...
	I1121 14:07:17.145884   83679 cli_runner.go:164] Run: docker container inspect ha-339216-m03 --format={{.State.Status}}
	I1121 14:07:17.165127   83679 status.go:371] ha-339216-m03 host status = "Running" (err=<nil>)
	I1121 14:07:17.165155   83679 host.go:66] Checking if "ha-339216-m03" exists ...
	I1121 14:07:17.165435   83679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-339216-m03
	I1121 14:07:17.184130   83679 host.go:66] Checking if "ha-339216-m03" exists ...
	I1121 14:07:17.184422   83679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:07:17.184459   83679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-339216-m03
	I1121 14:07:17.203075   83679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/ha-339216-m03/id_rsa Username:docker}
	I1121 14:07:17.297104   83679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:07:17.311630   83679 kubeconfig.go:125] found "ha-339216" server: "https://192.168.49.254:8443"
	I1121 14:07:17.311658   83679 api_server.go:166] Checking apiserver status ...
	I1121 14:07:17.311694   83679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:07:17.324470   83679 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1288/cgroup
	W1121 14:07:17.333608   83679 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1288/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:07:17.333659   83679 ssh_runner.go:195] Run: ls
	I1121 14:07:17.337828   83679 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1121 14:07:17.341971   83679 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1121 14:07:17.342005   83679 status.go:463] ha-339216-m03 apiserver status = Running (err=<nil>)
	I1121 14:07:17.342015   83679 status.go:176] ha-339216-m03 status: &{Name:ha-339216-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:07:17.342037   83679 status.go:174] checking status of ha-339216-m04 ...
	I1121 14:07:17.342306   83679 cli_runner.go:164] Run: docker container inspect ha-339216-m04 --format={{.State.Status}}
	I1121 14:07:17.361363   83679 status.go:371] ha-339216-m04 host status = "Running" (err=<nil>)
	I1121 14:07:17.361385   83679 host.go:66] Checking if "ha-339216-m04" exists ...
	I1121 14:07:17.361732   83679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-339216-m04
	I1121 14:07:17.380002   83679 host.go:66] Checking if "ha-339216-m04" exists ...
	I1121 14:07:17.380259   83679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:07:17.380305   83679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-339216-m04
	I1121 14:07:17.399458   83679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/ha-339216-m04/id_rsa Username:docker}
	I1121 14:07:17.492844   83679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:07:17.505492   83679 status.go:176] ha-339216-m04 status: &{Name:ha-339216-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.75s)
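
Stopping one control-plane node leaves the API reachable through the remaining members, and status exits non-zero (7 in this run) because a node is down; a sketch of the same check, with the exit-code interpretation based only on this run:

    # Stop the second control-plane node, then inspect overall profile status.
    minikube -p ha-demo node stop m02
    minikube -p ha-demo status
    echo "status exit code: $?"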

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (9.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-339216 node start m02 --alsologtostderr -v 5: (8.434921315s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.39s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-339216 stop --alsologtostderr -v 5: (37.357604775s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 start --wait true --alsologtostderr -v 5
E1121 14:08:28.608409   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:28.615366   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:28.627045   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:28.648796   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:28.690601   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:28.772068   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:28.934132   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:29.255740   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:29.898047   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:31.180193   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:32.441789   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:33.742274   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:38.864613   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:49.106577   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:09:00.143960   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-339216 start --wait true --alsologtostderr -v 5: (58.675012184s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.17s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 node delete m03 --alsologtostderr -v 5
E1121 14:09:09.588796   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-339216 node delete m03 --alsologtostderr -v 5: (8.680979418s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.51s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 stop --alsologtostderr -v 5
E1121 14:09:50.552039   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-339216 stop --alsologtostderr -v 5: (36.092644472s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-339216 status --alsologtostderr -v 5: exit status 7 (121.062379ms)

                                                
                                                
-- stdout --
	ha-339216
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-339216-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-339216-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 14:09:51.099682   99862 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:09:51.099805   99862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:09:51.099818   99862 out.go:374] Setting ErrFile to fd 2...
	I1121 14:09:51.099824   99862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:09:51.100071   99862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:09:51.100251   99862 out.go:368] Setting JSON to false
	I1121 14:09:51.100286   99862 mustload.go:66] Loading cluster: ha-339216
	I1121 14:09:51.100429   99862 notify.go:221] Checking for updates...
	I1121 14:09:51.100826   99862 config.go:182] Loaded profile config "ha-339216": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:09:51.100849   99862 status.go:174] checking status of ha-339216 ...
	I1121 14:09:51.101401   99862 cli_runner.go:164] Run: docker container inspect ha-339216 --format={{.State.Status}}
	I1121 14:09:51.121895   99862 status.go:371] ha-339216 host status = "Stopped" (err=<nil>)
	I1121 14:09:51.121917   99862 status.go:384] host is not running, skipping remaining checks
	I1121 14:09:51.121923   99862 status.go:176] ha-339216 status: &{Name:ha-339216 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:09:51.121963   99862 status.go:174] checking status of ha-339216-m02 ...
	I1121 14:09:51.122218   99862 cli_runner.go:164] Run: docker container inspect ha-339216-m02 --format={{.State.Status}}
	I1121 14:09:51.140438   99862 status.go:371] ha-339216-m02 host status = "Stopped" (err=<nil>)
	I1121 14:09:51.140488   99862 status.go:384] host is not running, skipping remaining checks
	I1121 14:09:51.140501   99862 status.go:176] ha-339216-m02 status: &{Name:ha-339216-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:09:51.140551   99862 status.go:174] checking status of ha-339216-m04 ...
	I1121 14:09:51.140857   99862 cli_runner.go:164] Run: docker container inspect ha-339216-m04 --format={{.State.Status}}
	I1121 14:09:51.159564   99862 status.go:371] ha-339216-m04 host status = "Stopped" (err=<nil>)
	I1121 14:09:51.159588   99862 status.go:384] host is not running, skipping remaining checks
	I1121 14:09:51.159595   99862 status.go:176] ha-339216-m04 status: &{Name:ha-339216-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.21s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (55.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-339216 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (54.405363076s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.22s)
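
The restart check is a full stop/start of the profile followed by a node-readiness probe; the go-template below mirrors the one in the log, with the profile name assumed:

    # Stop every node, start the cluster again, and list each node's Ready condition.
    minikube -p ha-demo stop
    minikube -p ha-demo start --wait true
    kubectl --context ha-demo get nodes \
      -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'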

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 node add --control-plane --alsologtostderr -v 5
E1121 14:11:12.474001   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-339216 node add --control-plane --alsologtostderr -v 5: (1m15.049225309s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-339216 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.95s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                    
TestJSONOutput/start/Command (38.83s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-779819 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-779819 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (38.830576349s)
--- PASS: TestJSONOutput/start/Command (38.83s)
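
With --output=json, minikube emits one CloudEvents-style JSON object per line on stdout; a sketch of filtering the step messages, assuming jq is available and the profile name json-demo:

    # Print only the numbered step messages from a JSON-mode start.
    minikube start -p json-demo --output=json --driver=docker --container-runtime=containerd \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'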

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-779819 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-779819 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.86s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-779819 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-779819 --output=json --user=testUser: (5.861871211s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-640397 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-640397 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (83.274146ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"52279405-eb86-4d97-9cd8-4e5f66a04c3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-640397] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6331bea9-22e0-454a-bbdd-87027439e13d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21847"}}
	{"specversion":"1.0","id":"4541c80e-e5e9-418e-aa0f-738fe9fb9b85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7b9cfd2a-1b94-408c-a02b-c710101ab620","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig"}}
	{"specversion":"1.0","id":"26b14377-7fa2-49c8-8097-da1d833c8e27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube"}}
	{"specversion":"1.0","id":"78814d66-6f89-48a5-87ae-5800fdf3d763","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"546bdff1-762b-49a3-ba30-0b35adc755f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9ab09277-ca68-42f5-b603-b7e11c3a93d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-640397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-640397
--- PASS: TestErrorJSONOutput (0.24s)
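
The failure path also arrives as a structured event (type io.k8s.sigs.minikube.error carrying exitcode, name, and message, as in the stdout above); a sketch of pulling those fields out, assuming jq and the hypothetical profile name json-error-demo:

    # Reproduce the unsupported-driver failure and extract the error details from the stream.
    minikube start -p json-error-demo --driver=fail --output=json \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.exitcode) \(.data.name): \(.data.message)"'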

                                                
                                    
TestKicCustomNetwork/create_custom_network (33.79s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-634197 --network=
E1121 14:13:28.613817   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:13:32.442354   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-634197 --network=: (31.626273403s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-634197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-634197
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-634197: (2.14144808s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.79s)
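
The custom-network variants attach the docker driver to a specific bridge network; a minimal sketch that pre-creates the network and reuses it, close to what TestKicExistingNetwork does below (network and profile names are assumptions):

    # Create a bridge network up front, start a profile attached to it, and confirm
    # the node container landed on that network.
    docker network create --driver=bridge my-kic-net
    minikube start -p netdemo --driver=docker --network=my-kic-net
    docker network inspect my-kic-net --format '{{range .Containers}}{{.Name}} {{end}}'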

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.43s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-391330 --network=bridge
E1121 14:13:56.316018   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-391330 --network=bridge: (21.367317902s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-391330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-391330
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-391330: (2.043409923s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.43s)

                                                
                                    
TestKicExistingNetwork (26.79s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1121 14:13:59.828760   14523 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1121 14:13:59.847292   14523 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1121 14:13:59.847390   14523 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1121 14:13:59.847425   14523 cli_runner.go:164] Run: docker network inspect existing-network
W1121 14:13:59.865199   14523 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1121 14:13:59.865226   14523 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1121 14:13:59.865257   14523 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1121 14:13:59.865356   14523 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1121 14:13:59.884326   14523 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-66cfc06dc768 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:44:28:22:82:94} reservation:<nil>}
I1121 14:13:59.884694   14523 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ab6fe0}
I1121 14:13:59.884726   14523 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1121 14:13:59.884784   14523 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1121 14:13:59.936292   14523 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-317349 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-317349 --network=existing-network: (24.616117979s)
helpers_test.go:175: Cleaning up "existing-network-317349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-317349
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-317349: (2.032443303s)
I1121 14:14:26.603770   14523 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.79s)
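The pre-created network that this test reuses can be set up manually as well; a minimal sketch, assuming Docker on the host, a hypothetical profile name, and the same free subnet this run happened to pick (any unused private subnet works):

# create a bridge network up front, outside of minikube
docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
# point minikube at the pre-existing network instead of letting it create one
out/minikube-linux-amd64 start -p existing-network-demo --network=existing-network
# deleting the profile should leave the externally created network in place
out/minikube-linux-amd64 delete -p existing-network-demo
docker network ls --format {{.Name}}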

                                                
                                    
TestKicCustomSubnet (24.11s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-999892 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-999892 --subnet=192.168.60.0/24: (21.928956104s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-999892 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-999892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-999892
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-999892: (2.159835706s)
--- PASS: TestKicCustomSubnet (24.11s)
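The subnet pin can be verified the same way the test does, by reading the IPAM config back out of the docker network minikube creates for the profile; a minimal sketch with a hypothetical profile name:

# start a cluster pinned to a specific subnet
out/minikube-linux-amd64 start -p subnet-demo --subnet=192.168.60.0/24
# the profile's docker network should report the requested subnet (expected: 192.168.60.0/24)
docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
out/minikube-linux-amd64 delete -p subnet-demo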

                                                
                                    
TestKicStaticIP (27.61s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-947151 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-947151 --static-ip=192.168.200.200: (25.28713811s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-947151 ip
helpers_test.go:175: Cleaning up "static-ip-947151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-947151
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-947151: (2.175482908s)
--- PASS: TestKicStaticIP (27.61s)
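The static-IP variant is the same round trip with --static-ip; a minimal sketch with a hypothetical profile name (the address has to sit outside the subnets Docker already reserves on the host):

# request a fixed node IP
out/minikube-linux-amd64 start -p static-ip-demo --static-ip=192.168.200.200
# print the node IP; it should match the requested address
out/minikube-linux-amd64 -p static-ip-demo ip
out/minikube-linux-amd64 delete -p static-ip-demo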

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (49.6s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-158261 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-158261 --driver=docker  --container-runtime=containerd: (21.121663681s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-160929 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-160929 --driver=docker  --container-runtime=containerd: (22.901682755s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-158261
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-160929
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-160929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-160929
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-160929: (1.956937853s)
helpers_test.go:175: Cleaning up "first-158261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-158261
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-158261: (2.356396508s)
--- PASS: TestMinikubeProfile (49.60s)
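The profile handling above condenses to a short sequence; a minimal sketch with hypothetical profile names (minikube profile <name> switches the active profile, and profile list -ojson reports all of them):

# create two independent clusters
out/minikube-linux-amd64 start -p first-demo --driver=docker --container-runtime=containerd
out/minikube-linux-amd64 start -p second-demo --driver=docker --container-runtime=containerd
# switch the active profile and list both as JSON
out/minikube-linux-amd64 profile first-demo
out/minikube-linux-amd64 profile list -ojson
# clean up
out/minikube-linux-amd64 delete -p second-demo
out/minikube-linux-amd64 delete -p first-demo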

                                                
                                    
TestMountStart/serial/StartWithMountFirst (4.66s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-535895 --memory=3072 --mount-string /tmp/TestMountStartserial3576890416/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-535895 --memory=3072 --mount-string /tmp/TestMountStartserial3576890416/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.658931225s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.66s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-535895 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
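The host-directory mount exercised by this group can be reproduced by hand; a minimal sketch with a hypothetical host path and profile name, mirroring the flags in the start command above (--no-kubernetes keeps the node lightweight since only the mount is under test):

# hypothetical host directory to share into the node
mkdir -p /tmp/mount-demo-src
out/minikube-linux-amd64 start -p mount-demo --memory=3072 \
  --mount-string /tmp/mount-demo-src:/minikube-host \
  --mount-uid 0 --mount-gid 0 --mount-port 46464 \
  --no-kubernetes --driver=docker --container-runtime=containerd
# verify the mount from inside the node
out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host
out/minikube-linux-amd64 delete -p mount-demo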

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.24s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-564140 --memory=3072 --mount-string /tmp/TestMountStartserial3576890416/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-564140 --memory=3072 --mount-string /tmp/TestMountStartserial3576890416/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.241630745s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.24s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-564140 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-535895 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-535895 --alsologtostderr -v=5: (1.691588373s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-564140 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-564140
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-564140: (1.266276725s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.6s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-564140
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-564140: (6.602328369s)
--- PASS: TestMountStart/serial/RestartStopped (7.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-564140 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (62.7s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054240 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-054240 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.199512315s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.70s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.19s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-054240 -- rollout status deployment/busybox: (3.61547981s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec busybox-7b57f96db7-rd6n4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec busybox-7b57f96db7-vnxg5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec busybox-7b57f96db7-rd6n4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec busybox-7b57f96db7-vnxg5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec busybox-7b57f96db7-rd6n4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec busybox-7b57f96db7-vnxg5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.19s)
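The DNS checks above amount to: deploy a busybox Deployment across both nodes, wait for the rollout, then resolve an external and an in-cluster name from every pod. A minimal sketch against the same profile, assuming the default namespace only holds the busybox pods (as in this run):

out/minikube-linux-amd64 kubectl -p multinode-054240 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
out/minikube-linux-amd64 kubectl -p multinode-054240 -- rollout status deployment/busybox
# resolve an external and an in-cluster name from each pod
for pod in $(out/minikube-linux-amd64 kubectl -p multinode-054240 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
  out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec "$pod" -- nslookup kubernetes.io
  out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done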

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec busybox-7b57f96db7-rd6n4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec busybox-7b57f96db7-rd6n4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec busybox-7b57f96db7-vnxg5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec busybox-7b57f96db7-vnxg5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
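The host-reachability check resolves host.minikube.internal inside a pod and pings the address it returns; the awk/cut pipeline below is the one the test uses to pull the address out of busybox's nslookup output. A sketch against one pod from this run (substitute whatever pod name get pods reports):

# resolve the host gateway name from inside the pod and extract the address
HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec busybox-7b57f96db7-rd6n4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
# a single ping from the pod back to the host should succeed
out/minikube-linux-amd64 kubectl -p multinode-054240 -- exec busybox-7b57f96db7-rd6n4 -- sh -c "ping -c 1 $HOST_IP"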

                                                
                                    
TestMultiNode/serial/AddNode (23.97s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-054240 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-054240 -v=5 --alsologtostderr: (23.309610411s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.97s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-054240 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.99s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 cp testdata/cp-test.txt multinode-054240:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 cp multinode-054240:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4032317460/001/cp-test_multinode-054240.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 cp multinode-054240:/home/docker/cp-test.txt multinode-054240-m02:/home/docker/cp-test_multinode-054240_multinode-054240-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m02 "sudo cat /home/docker/cp-test_multinode-054240_multinode-054240-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 cp multinode-054240:/home/docker/cp-test.txt multinode-054240-m03:/home/docker/cp-test_multinode-054240_multinode-054240-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m03 "sudo cat /home/docker/cp-test_multinode-054240_multinode-054240-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 cp testdata/cp-test.txt multinode-054240-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 cp multinode-054240-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4032317460/001/cp-test_multinode-054240-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 cp multinode-054240-m02:/home/docker/cp-test.txt multinode-054240:/home/docker/cp-test_multinode-054240-m02_multinode-054240.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240 "sudo cat /home/docker/cp-test_multinode-054240-m02_multinode-054240.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 cp multinode-054240-m02:/home/docker/cp-test.txt multinode-054240-m03:/home/docker/cp-test_multinode-054240-m02_multinode-054240-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m03 "sudo cat /home/docker/cp-test_multinode-054240-m02_multinode-054240-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 cp testdata/cp-test.txt multinode-054240-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 cp multinode-054240-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4032317460/001/cp-test_multinode-054240-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 cp multinode-054240-m03:/home/docker/cp-test.txt multinode-054240:/home/docker/cp-test_multinode-054240-m03_multinode-054240.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240 "sudo cat /home/docker/cp-test_multinode-054240-m03_multinode-054240.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 cp multinode-054240-m03:/home/docker/cp-test.txt multinode-054240-m02:/home/docker/cp-test_multinode-054240-m03_multinode-054240-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m02 "sudo cat /home/docker/cp-test_multinode-054240-m03_multinode-054240-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.99s)
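The copy matrix above reduces to minikube cp in three directions (host to node, node to host, node to node), each verified with an ssh cat on the destination; a minimal sketch, with a hypothetical host destination path:

# host -> node
out/minikube-linux-amd64 -p multinode-054240 cp testdata/cp-test.txt multinode-054240:/home/docker/cp-test.txt
# node -> host (hypothetical destination path)
out/minikube-linux-amd64 -p multinode-054240 cp multinode-054240:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
# node -> node
out/minikube-linux-amd64 -p multinode-054240 cp multinode-054240:/home/docker/cp-test.txt multinode-054240-m02:/home/docker/cp-test.txt
# verify on the destination node
out/minikube-linux-amd64 -p multinode-054240 ssh -n multinode-054240-m02 "sudo cat /home/docker/cp-test.txt"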

                                                
                                    
TestMultiNode/serial/StopNode (2.3s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-054240 node stop m03: (1.275125522s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-054240 status: exit status 7 (508.476942ms)

                                                
                                                
-- stdout --
	multinode-054240
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-054240-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-054240-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-054240 status --alsologtostderr: exit status 7 (516.718718ms)

                                                
                                                
-- stdout --
	multinode-054240
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-054240-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-054240-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 14:18:16.721130  162438 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:18:16.721380  162438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:18:16.721388  162438 out.go:374] Setting ErrFile to fd 2...
	I1121 14:18:16.721392  162438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:18:16.721591  162438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:18:16.721760  162438 out.go:368] Setting JSON to false
	I1121 14:18:16.721792  162438 mustload.go:66] Loading cluster: multinode-054240
	I1121 14:18:16.721878  162438 notify.go:221] Checking for updates...
	I1121 14:18:16.722241  162438 config.go:182] Loaded profile config "multinode-054240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:18:16.722259  162438 status.go:174] checking status of multinode-054240 ...
	I1121 14:18:16.722843  162438 cli_runner.go:164] Run: docker container inspect multinode-054240 --format={{.State.Status}}
	I1121 14:18:16.746330  162438 status.go:371] multinode-054240 host status = "Running" (err=<nil>)
	I1121 14:18:16.746351  162438 host.go:66] Checking if "multinode-054240" exists ...
	I1121 14:18:16.746619  162438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-054240
	I1121 14:18:16.766129  162438 host.go:66] Checking if "multinode-054240" exists ...
	I1121 14:18:16.766432  162438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:18:16.766482  162438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-054240
	I1121 14:18:16.786162  162438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32910 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/multinode-054240/id_rsa Username:docker}
	I1121 14:18:16.879174  162438 ssh_runner.go:195] Run: systemctl --version
	I1121 14:18:16.885607  162438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:18:16.898250  162438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:18:16.960483  162438 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-21 14:18:16.95055176 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:18:16.961034  162438 kubeconfig.go:125] found "multinode-054240" server: "https://192.168.67.2:8443"
	I1121 14:18:16.961063  162438 api_server.go:166] Checking apiserver status ...
	I1121 14:18:16.961095  162438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:18:16.973029  162438 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1281/cgroup
	W1121 14:18:16.981802  162438 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1281/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:18:16.981863  162438 ssh_runner.go:195] Run: ls
	I1121 14:18:16.985573  162438 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1121 14:18:16.989717  162438 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1121 14:18:16.989746  162438 status.go:463] multinode-054240 apiserver status = Running (err=<nil>)
	I1121 14:18:16.989758  162438 status.go:176] multinode-054240 status: &{Name:multinode-054240 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:18:16.989777  162438 status.go:174] checking status of multinode-054240-m02 ...
	I1121 14:18:16.990018  162438 cli_runner.go:164] Run: docker container inspect multinode-054240-m02 --format={{.State.Status}}
	I1121 14:18:17.008853  162438 status.go:371] multinode-054240-m02 host status = "Running" (err=<nil>)
	I1121 14:18:17.008878  162438 host.go:66] Checking if "multinode-054240-m02" exists ...
	I1121 14:18:17.009146  162438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-054240-m02
	I1121 14:18:17.029813  162438 host.go:66] Checking if "multinode-054240-m02" exists ...
	I1121 14:18:17.030107  162438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:18:17.030158  162438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-054240-m02
	I1121 14:18:17.049412  162438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32915 SSHKeyPath:/home/jenkins/minikube-integration/21847-11004/.minikube/machines/multinode-054240-m02/id_rsa Username:docker}
	I1121 14:18:17.142989  162438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:18:17.155693  162438 status.go:176] multinode-054240-m02 status: &{Name:multinode-054240-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:18:17.155725  162438 status.go:174] checking status of multinode-054240-m03 ...
	I1121 14:18:17.156022  162438 cli_runner.go:164] Run: docker container inspect multinode-054240-m03 --format={{.State.Status}}
	I1121 14:18:17.174968  162438 status.go:371] multinode-054240-m03 host status = "Stopped" (err=<nil>)
	I1121 14:18:17.174995  162438 status.go:384] host is not running, skipping remaining checks
	I1121 14:18:17.175001  162438 status.go:176] multinode-054240-m03 status: &{Name:multinode-054240-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
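Stopping a single worker while the rest of the cluster keeps running can be checked the same way; a minimal sketch (as in the run above, status exits with code 7 whenever a node is stopped, so the exit code is echoed explicitly):

out/minikube-linux-amd64 -p multinode-054240 node stop m03
# the stopped node shows host/kubelet Stopped while the others stay Running; exit code is 7
out/minikube-linux-amd64 -p multinode-054240 status --alsologtostderr
echo "status exit code: $?"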

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.02s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-054240 node start m03 -v=5 --alsologtostderr: (6.301306942s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.02s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.77s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-054240
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-054240
E1121 14:18:28.610407   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:18:32.444640   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-054240: (25.074979545s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054240 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-054240 --wait=true -v=5 --alsologtostderr: (53.569741875s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-054240
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.77s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.28s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-054240 node delete m03: (4.674267285s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.28s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.13s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 stop
E1121 14:19:55.507707   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-054240 stop: (23.931253496s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-054240 status: exit status 7 (99.128937ms)

                                                
                                                
-- stdout --
	multinode-054240
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-054240-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-054240 status --alsologtostderr: exit status 7 (98.408987ms)

                                                
                                                
-- stdout --
	multinode-054240
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-054240-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 14:20:12.333835  172176 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:20:12.333968  172176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:20:12.333977  172176 out.go:374] Setting ErrFile to fd 2...
	I1121 14:20:12.333983  172176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:20:12.334207  172176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:20:12.334412  172176 out.go:368] Setting JSON to false
	I1121 14:20:12.334452  172176 mustload.go:66] Loading cluster: multinode-054240
	I1121 14:20:12.334503  172176 notify.go:221] Checking for updates...
	I1121 14:20:12.334881  172176 config.go:182] Loaded profile config "multinode-054240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:20:12.334899  172176 status.go:174] checking status of multinode-054240 ...
	I1121 14:20:12.335335  172176 cli_runner.go:164] Run: docker container inspect multinode-054240 --format={{.State.Status}}
	I1121 14:20:12.354630  172176 status.go:371] multinode-054240 host status = "Stopped" (err=<nil>)
	I1121 14:20:12.354664  172176 status.go:384] host is not running, skipping remaining checks
	I1121 14:20:12.354674  172176 status.go:176] multinode-054240 status: &{Name:multinode-054240 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:20:12.354725  172176 status.go:174] checking status of multinode-054240-m02 ...
	I1121 14:20:12.355064  172176 cli_runner.go:164] Run: docker container inspect multinode-054240-m02 --format={{.State.Status}}
	I1121 14:20:12.373355  172176 status.go:371] multinode-054240-m02 host status = "Stopped" (err=<nil>)
	I1121 14:20:12.373380  172176 status.go:384] host is not running, skipping remaining checks
	I1121 14:20:12.373388  172176 status.go:176] multinode-054240-m02 status: &{Name:multinode-054240-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.13s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (44.81s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054240 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-054240 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (44.199761581s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054240 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.81s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.45s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-054240
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054240-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-054240-m02 --driver=docker  --container-runtime=containerd: exit status 14 (87.117651ms)

                                                
                                                
-- stdout --
	* [multinode-054240-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-054240-m02' is duplicated with machine name 'multinode-054240-m02' in profile 'multinode-054240'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054240-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-054240-m03 --driver=docker  --container-runtime=containerd: (24.558118738s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-054240
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-054240: exit status 80 (301.638427ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-054240 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-054240-m03 already exists in multinode-054240-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-054240-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-054240-m03: (2.436343955s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.45s)
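The name-conflict guard can be reproduced directly; a minimal sketch, assuming the multinode profile from this group still exists (its second machine is named multinode-054240-m02, so starting a new profile with that exact name is rejected with MK_USAGE, exit status 14 in the run above):

out/minikube-linux-amd64 start -p multinode-054240-m02 --driver=docker --container-runtime=containerd
echo "exit code: $?"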

                                                
                                    
TestPreload (114.07s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-633025 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-633025 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (48.617754354s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-633025 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-633025 image pull gcr.io/k8s-minikube/busybox: (2.459016174s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-633025
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-633025: (6.731959431s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-633025 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-633025 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (53.53658079s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-633025 image list
helpers_test.go:175: Cleaning up "test-preload-633025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-633025
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-633025: (2.486939258s)
--- PASS: TestPreload (114.07s)
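The preload check is a round trip: start with --preload=false on a pinned Kubernetes version, pull an extra image, stop, start again, and confirm the image survived the restart; a minimal sketch with a hypothetical profile name:

out/minikube-linux-amd64 start -p preload-demo --memory=3072 --preload=false --driver=docker \
  --container-runtime=containerd --kubernetes-version=v1.32.0
# pull an image into the node's containerd store, then restart the cluster
out/minikube-linux-amd64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
out/minikube-linux-amd64 stop -p preload-demo
out/minikube-linux-amd64 start -p preload-demo --memory=3072 --driver=docker --container-runtime=containerd
# the pulled image should still be listed after the restart
out/minikube-linux-amd64 -p preload-demo image list
out/minikube-linux-amd64 delete -p preload-demo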

                                                
                                    
TestScheduledStopUnix (97.87s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-821545 --memory=3072 --driver=docker  --container-runtime=containerd
E1121 14:23:28.612717   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:23:32.442352   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-821545 --memory=3072 --driver=docker  --container-runtime=containerd: (21.769826127s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-821545 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1121 14:23:44.750633  190429 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:23:44.751117  190429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:23:44.751130  190429 out.go:374] Setting ErrFile to fd 2...
	I1121 14:23:44.751219  190429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:23:44.751753  190429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:23:44.752450  190429 out.go:368] Setting JSON to false
	I1121 14:23:44.752604  190429 mustload.go:66] Loading cluster: scheduled-stop-821545
	I1121 14:23:44.752961  190429 config.go:182] Loaded profile config "scheduled-stop-821545": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:23:44.753036  190429 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/config.json ...
	I1121 14:23:44.753222  190429 mustload.go:66] Loading cluster: scheduled-stop-821545
	I1121 14:23:44.753348  190429 config.go:182] Loaded profile config "scheduled-stop-821545": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-821545 -n scheduled-stop-821545
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-821545 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1121 14:23:45.152189  190580 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:23:45.152284  190580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:23:45.152289  190580 out.go:374] Setting ErrFile to fd 2...
	I1121 14:23:45.152293  190580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:23:45.152968  190580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:23:45.153373  190580 out.go:368] Setting JSON to false
	I1121 14:23:45.153714  190580 daemonize_unix.go:73] killing process 190464 as it is an old scheduled stop
	I1121 14:23:45.153895  190580 mustload.go:66] Loading cluster: scheduled-stop-821545
	I1121 14:23:45.154312  190580 config.go:182] Loaded profile config "scheduled-stop-821545": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:23:45.154382  190580 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/config.json ...
	I1121 14:23:45.154640  190580 mustload.go:66] Loading cluster: scheduled-stop-821545
	I1121 14:23:45.154774  190580 config.go:182] Loaded profile config "scheduled-stop-821545": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1121 14:23:45.160632   14523 retry.go:31] will retry after 86.612µs: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.161784   14523 retry.go:31] will retry after 188.503µs: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.162937   14523 retry.go:31] will retry after 168.758µs: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.164047   14523 retry.go:31] will retry after 261.198µs: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.165190   14523 retry.go:31] will retry after 650.657µs: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.166339   14523 retry.go:31] will retry after 618.522µs: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.167473   14523 retry.go:31] will retry after 742.18µs: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.168615   14523 retry.go:31] will retry after 1.897947ms: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.170857   14523 retry.go:31] will retry after 3.401388ms: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.175101   14523 retry.go:31] will retry after 5.440881ms: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.181345   14523 retry.go:31] will retry after 7.768854ms: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.189634   14523 retry.go:31] will retry after 11.253118ms: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.201909   14523 retry.go:31] will retry after 15.036158ms: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.217226   14523 retry.go:31] will retry after 17.921329ms: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.235554   14523 retry.go:31] will retry after 20.672718ms: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
I1121 14:23:45.256859   14523 retry.go:31] will retry after 37.342131ms: open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-821545 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-821545 -n scheduled-stop-821545
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-821545
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-821545 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1121 14:24:11.054068  191468 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:24:11.054311  191468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:24:11.054321  191468 out.go:374] Setting ErrFile to fd 2...
	I1121 14:24:11.054325  191468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:24:11.054518  191468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:24:11.054747  191468 out.go:368] Setting JSON to false
	I1121 14:24:11.054830  191468 mustload.go:66] Loading cluster: scheduled-stop-821545
	I1121 14:24:11.055146  191468 config.go:182] Loaded profile config "scheduled-stop-821545": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:24:11.055214  191468 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/scheduled-stop-821545/config.json ...
	I1121 14:24:11.055400  191468 mustload.go:66] Loading cluster: scheduled-stop-821545
	I1121 14:24:11.055499  191468 config.go:182] Loaded profile config "scheduled-stop-821545": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1121 14:24:51.681136   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-821545
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-821545: exit status 7 (81.157137ms)

                                                
                                                
-- stdout --
	scheduled-stop-821545
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-821545 -n scheduled-stop-821545
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-821545 -n scheduled-stop-821545: exit status 7 (83.853995ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-821545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-821545
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-821545: (4.562723265s)
--- PASS: TestScheduledStopUnix (97.87s)
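Note: the retry.go lines earlier in this test show the pid-file poll backing off with roughly geometrically increasing delays (86µs up to ~37ms). Below is a minimal Go sketch of that polling pattern; the helper name (waitForFile) and the growth factor are illustrative assumptions, not minikube's actual retry.go implementation.

// Minimal sketch of the exponential-backoff poll seen in the retry.go lines
// above. waitForFile and its constants are illustrative assumptions only.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForFile(path string, maxWait time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow ~1.5x per attempt, like the log above
	}
}

func main() {
	_ = waitForFile("/tmp/example.pid", 2*time.Second)
}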

                                                
                                    
x
+
TestInsufficientStorage (9.96s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-247031 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-247031 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.450052969s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a33c92b3-2b62-48a4-acbc-445f59b2c0ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-247031] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4cf99bd6-a3aa-4a12-bd10-6fedf66e3542","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21847"}}
	{"specversion":"1.0","id":"55afeb80-11dc-4bcf-a02f-21ae202785db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ec22b6da-82a5-4c94-810b-102a284251c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig"}}
	{"specversion":"1.0","id":"08db9292-132f-4e62-a251-30095a8eac57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube"}}
	{"specversion":"1.0","id":"fa087c0b-db16-4a86-99e4-c67c7022ba08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"55abd2d3-0104-4ff3-8a0e-8726739a3396","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ca513b99-6180-4304-af43-f7aa66ad1f0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2167ec64-6079-44a6-b5d1-5bffdbe8e4ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"03dcc509-0cf3-4e05-a683-ff5a17867f10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1820f3c4-4c82-4311-b5b4-6ebed4f31a3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a8b51db2-aa85-48aa-8d14-af3e2e29ca64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-247031\" primary control-plane node in \"insufficient-storage-247031\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"17bd76ed-011d-4602-9e64-1e2bf24d1733","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763507788-21924 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d130008b-0677-460e-8964-9b18989ecb60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4f23254-d3fb-41cc-9a27-8baad2df298b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-247031 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-247031 --output=json --layout=cluster: exit status 7 (300.210841ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-247031","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-247031","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1121 14:25:08.529431  193720 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-247031" does not appear in /home/jenkins/minikube-integration/21847-11004/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-247031 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-247031 --output=json --layout=cluster: exit status 7 (296.497988ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-247031","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-247031","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1121 14:25:08.826799  193829 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-247031" does not appear in /home/jenkins/minikube-integration/21847-11004/kubeconfig
	E1121 14:25:08.837803  193829 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/insufficient-storage-247031/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-247031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-247031
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-247031: (1.911187294s)
--- PASS: TestInsufficientStorage (9.96s)
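Note: with --output=json, minikube start emits one CloudEvents-style JSON object per line, as in the stdout above. The sketch below scans such lines and surfaces "io.k8s.sigs.minikube.error" events like RSRC_DOCKER_STORAGE; the struct models only the fields visible in this log and is not the canonical minikube type.

// Sketch: scan `minikube start --output=json` output (one JSON object per
// line) and report error events such as RSRC_DOCKER_STORAGE.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}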

                                                
                                    
x
+
TestRunningBinaryUpgrade (47.3s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2769856816 start -p running-upgrade-687843 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2769856816 start -p running-upgrade-687843 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (21.334050752s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-687843 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-687843 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.258130672s)
helpers_test.go:175: Cleaning up "running-upgrade-687843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-687843
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-687843: (2.012034961s)
--- PASS: TestRunningBinaryUpgrade (47.30s)

                                                
                                    
x
+
TestKubernetesUpgrade (328.22s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-797080 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-797080 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.380596526s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-797080
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-797080: (3.954058031s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-797080 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-797080 status --format={{.Host}}: exit status 7 (81.992628ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-797080 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-797080 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m38.73686922s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-797080 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-797080 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-797080 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (86.347041ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-797080] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-797080
	    minikube start -p kubernetes-upgrade-797080 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7970802 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-797080 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-797080 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-797080 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (18.19764787s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-797080" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-797080
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-797080: (2.717085063s)
--- PASS: TestKubernetesUpgrade (328.22s)
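Note: the exit status 106 above comes from minikube refusing to move an existing v1.34.1 cluster back to v1.28.0. The sketch below shows the kind of version gate that produces a K8S_DOWNGRADE_UNSUPPORTED error, using golang.org/x/mod/semver; it is an illustration under that assumption, not minikube's actual code.

// Sketch of a downgrade gate: refuse to start when the requested Kubernetes
// version is older than the one the existing cluster already runs.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func checkVersion(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			existing, requested)
	}
	return nil
}

func main() {
	if err := checkVersion("v1.34.1", "v1.28.0"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}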

                                                
                                    
x
+
TestMissingContainerUpgrade (136.71s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1385547671 start -p missing-upgrade-117384 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1385547671 start -p missing-upgrade-117384 --memory=3072 --driver=docker  --container-runtime=containerd: (1m24.856316568s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-117384
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-117384: (1.557232094s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-117384
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-117384 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-117384 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (45.538235122s)
helpers_test.go:175: Cleaning up "missing-upgrade-117384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-117384
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-117384: (2.066948641s)
--- PASS: TestMissingContainerUpgrade (136.71s)

                                                
                                    
x
+
TestPause/serial/Start (51.48s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-350484 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-350484 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (51.482142682s)
--- PASS: TestPause/serial/Start (51.48s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.17s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-350484 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-350484 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.151985748s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.17s)

                                                
                                    
x
+
TestPause/serial/Pause (1.12s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-350484 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-350484 --alsologtostderr -v=5: (1.122850992s)
--- PASS: TestPause/serial/Pause (1.12s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-350484 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-350484 --output=json --layout=cluster: exit status 2 (332.484896ms)

                                                
                                                
-- stdout --
	{"Name":"pause-350484","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-350484","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
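Note: the status --output=json --layout=cluster payload above reports HTTP-style status codes (418 Paused, 405 Stopped, 507 InsufficientStorage, 200 OK). The sketch below decodes only the fields visible in this log; it is an assumption-based model, not minikube's canonical status type.

// Sketch: decode the `minikube status --output=json --layout=cluster` payload.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
	Nodes      []node               `json:"Nodes"`
}

func main() {
	raw := `{"Name":"pause-350484","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-350484","StatusCode":200,"StatusName":"OK","Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
}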

                                                
                                    
x
+
TestPause/serial/Unpause (0.91s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-350484 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.13s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-350484 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-350484 --alsologtostderr -v=5: (1.125682955s)
--- PASS: TestPause/serial/PauseAgain (1.13s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.04s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-350484 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-350484 --alsologtostderr -v=5: (3.035040928s)
--- PASS: TestPause/serial/DeletePaused (3.04s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.62s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-350484
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-350484: exit status 1 (17.748662ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-350484: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.62s)
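Note: the verification step above treats a non-zero exit from `docker volume inspect pause-350484` with "no such volume" on stderr as confirmation that the profile's volume is gone. A minimal sketch of that check using os/exec; the helper name is an assumption, not the test's actual code.

// Sketch: a profile volume counts as cleaned up when `docker volume inspect`
// fails and its combined output mentions "no such volume".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func volumeGone(name string) bool {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	return err != nil && strings.Contains(strings.ToLower(string(out)), "no such volume")
}

func main() {
	fmt.Println("pause-350484 volume removed:", volumeGone("pause-350484"))
}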

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.60s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (91.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3633893963 start -p stopped-upgrade-875051 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3633893963 start -p stopped-upgrade-875051 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (55.260795099s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3633893963 -p stopped-upgrade-875051 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3633893963 -p stopped-upgrade-875051 stop: (11.74860992s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-875051 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-875051 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.929985924s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (91.94s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-875051
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-875051: (1.185181983s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187733 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-187733 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (84.359087ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-187733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (21.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187733 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-187733 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.625032789s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-187733 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (21.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (22.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187733 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1121 14:28:28.608519   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:28:32.442813   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-187733 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (20.179020652s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-187733 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-187733 status -o json: exit status 2 (337.303406ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-187733","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-187733
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-187733: (2.164788964s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.68s)
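Note: the plain `status -o json` output above is a flat object (Host/Kubelet/APIServer/Kubeconfig as strings, Worker as bool), and the command exits 2 when the host runs but Kubernetes is stopped. The sketch below decodes that shape; the field set is taken from this log and is not the canonical minikube type.

// Sketch: decode the flat `minikube status -o json` object shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type profileStatus struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

func main() {
	raw := `{"Name":"NoKubernetes-187733","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}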

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-459127 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-459127 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (177.235725ms)

                                                
                                                
-- stdout --
	* [false-459127] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 14:28:44.228526  236997 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:28:44.228715  236997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:28:44.228725  236997 out.go:374] Setting ErrFile to fd 2...
	I1121 14:28:44.228732  236997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:28:44.229007  236997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11004/.minikube/bin
	I1121 14:28:44.229522  236997 out.go:368] Setting JSON to false
	I1121 14:28:44.230792  236997 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4266,"bootTime":1763731058,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:28:44.230858  236997 start.go:143] virtualization: kvm guest
	I1121 14:28:44.232858  236997 out.go:179] * [false-459127] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:28:44.234912  236997 notify.go:221] Checking for updates...
	I1121 14:28:44.234948  236997 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:28:44.236253  236997 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:28:44.237771  236997 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11004/kubeconfig
	I1121 14:28:44.239105  236997 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11004/.minikube
	I1121 14:28:44.240307  236997 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:28:44.241725  236997 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:28:44.243615  236997 config.go:182] Loaded profile config "NoKubernetes-187733": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1121 14:28:44.243743  236997 config.go:182] Loaded profile config "cert-expiration-371956": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:28:44.243824  236997 config.go:182] Loaded profile config "kubernetes-upgrade-797080": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:28:44.243924  236997 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:28:44.270451  236997 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:28:44.270560  236997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:28:44.335583  236997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-21 14:28:44.323960888 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:28:44.335687  236997 docker.go:319] overlay module found
	I1121 14:28:44.337602  236997 out.go:179] * Using the docker driver based on user configuration
	I1121 14:28:44.338870  236997 start.go:309] selected driver: docker
	I1121 14:28:44.338891  236997 start.go:930] validating driver "docker" against <nil>
	I1121 14:28:44.338904  236997 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:28:44.341018  236997 out.go:203] 
	W1121 14:28:44.342374  236997 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1121 14:28:44.343589  236997 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-459127 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-459127

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-459127

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-459127

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-459127

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-459127

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-459127

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-459127

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-459127

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-459127

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-459127

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-459127

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-459127" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-459127" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:28:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-187733
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:25:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-371956
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:26:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-797080
contexts:
- context:
    cluster: NoKubernetes-187733
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:28:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-187733
  name: NoKubernetes-187733
- context:
    cluster: cert-expiration-371956
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:25:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-371956
  name: cert-expiration-371956
- context:
    cluster: kubernetes-upgrade-797080
    user: kubernetes-upgrade-797080
  name: kubernetes-upgrade-797080
current-context: ""
kind: Config
users:
- name: NoKubernetes-187733
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/NoKubernetes-187733/client.crt
    client-key: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/NoKubernetes-187733/client.key
- name: cert-expiration-371956
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/cert-expiration-371956/client.crt
    client-key: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/cert-expiration-371956/client.key
- name: kubernetes-upgrade-797080
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/kubernetes-upgrade-797080/client.crt
    client-key: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/kubernetes-upgrade-797080/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-459127

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459127"

                                                
                                                
----------------------- debugLogs end: false-459127 [took: 3.546004779s] --------------------------------
helpers_test.go:175: Cleaning up "false-459127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-459127
--- PASS: TestNetworkPlugins/group/false (3.92s)
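Every ">>> host:" block in the debug dump above prints the same message because debugLogs ran against a profile that was never created. A quick manual check with the same binary (illustrative; the profile name is taken from this run):

    # false-459127 is not expected to appear among the known profiles
    out/minikube-linux-amd64 profile list
    # the same cleanup command the helper runs above
    out/minikube-linux-amd64 delete -p false-459127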

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187733 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-187733 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.978124679s)
--- PASS: TestNoKubernetes/serial/Start (7.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
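The download check above only looks at the on-disk cache: with --no-kubernetes the version directory is v0.0.0 and should not contain Kubernetes binaries. A rough equivalent of that assertion (a sketch; treating an absent or empty directory as a pass is an assumption, not necessarily the test's exact check):

    CACHE=/home/jenkins/minikube-integration/21847-11004/.minikube/cache/linux/amd64/v0.0.0
    # pass if nothing was downloaded into the no-Kubernetes version directory
    [ -z "$(ls -A "$CACHE" 2>/dev/null)" ] && echo "no k8s downloads" || ls -l "$CACHE"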

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-187733 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-187733 "sudo systemctl is-active --quiet service kubelet": exit status 1 (332.119737ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
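The non-zero exit is the expected result here: systemctl is-active exits 0 when the unit is active and non-zero when it is not (3 in this run), and with --no-kubernetes there is no kubelet running. The same check can be repeated by hand with the command the test uses:

    # expect a non-zero exit because kubelet is not active in this profile
    out/minikube-linux-amd64 ssh -p NoKubernetes-187733 "sudo systemctl is-active --quiet service kubelet"
    echo "exit: $?"   # 3 (inactive) in the run above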

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.68s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-187733
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-187733: (2.308326699s)
--- PASS: TestNoKubernetes/serial/Stop (2.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187733 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-187733 --driver=docker  --container-runtime=containerd: (7.229344581s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-187733 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-187733 "sudo systemctl is-active --quiet service kubelet": exit status 1 (357.932002ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (56.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-012258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-012258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (56.52082298s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (56.52s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (55.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-921956 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-921956 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.340769463s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-376255 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-376255 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (45.354380807s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.35s)
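The distinguishing flag for this group is --apiserver-port=8444 instead of minikube's usual 8443 (compare the :8443 servers in the kubeconfig dump earlier in this log). One way to confirm the port from the generated kubeconfig (illustrative jsonpath; assumes the context created by this start):

    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-376255")].cluster.server}'
    # expected to print an https:// URL ending in :8444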

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-012258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-012258 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-012258 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-012258 --alsologtostderr -v=3: (12.163835415s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-376255 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-376255 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-376255 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-376255 --alsologtostderr -v=3: (12.131286199s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-921956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-921956 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-921956 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-921956 --alsologtostderr -v=3: (12.093362675s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-012258 -n old-k8s-version-012258
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-012258 -n old-k8s-version-012258: exit status 7 (89.256222ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-012258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (49.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-012258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-012258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (49.188498943s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-012258 -n old-k8s-version-012258
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255: exit status 7 (80.768402ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-376255 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-376255 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-376255 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (44.539468736s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.92s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-921956 -n no-preload-921956
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-921956 -n no-preload-921956: exit status 7 (92.581335ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-921956 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (55.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-921956 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-921956 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.347202159s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-921956 -n no-preload-921956
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.78s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k9w7k" [f36708ea-72e6-4729-af83-1d2323c19c5d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004572178s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-rzvcx" [1317f10e-c863-49c6-98ef-eb4bc9354fe3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004564903s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k9w7k" [f36708ea-72e6-4729-af83-1d2323c19c5d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00358609s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-376255 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-rzvcx" [1317f10e-c863-49c6-98ef-eb4bc9354fe3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004341751s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-012258 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (31.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-163061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-163061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (31.908884894s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.91s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-376255 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-376255 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255: exit status 2 (357.678337ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255: exit status 2 (343.472223ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-376255 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-376255 -n default-k8s-diff-port-376255
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-012258 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-012258 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-012258 -n old-k8s-version-012258
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-012258 -n old-k8s-version-012258: exit status 2 (387.959761ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-012258 -n old-k8s-version-012258
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-012258 -n old-k8s-version-012258: exit status 2 (367.939626ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-012258 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-012258 --alsologtostderr -v=1: (1.003756873s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-012258 -n old-k8s-version-012258
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-012258 -n old-k8s-version-012258
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vt8fz" [48e38363-e957-483b-86c4-8cac885196dd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003903144s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (45.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-013140 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-013140 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (45.363281821s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.36s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vt8fz" [48e38363-e957-483b-86c4-8cac885196dd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003903117s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-921956 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (45.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (45.674518771s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.67s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-921956 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-921956 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-921956 -n no-preload-921956
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-921956 -n no-preload-921956: exit status 2 (404.852383ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-921956 -n no-preload-921956
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-921956 -n no-preload-921956: exit status 2 (348.097126ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-921956 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-921956 -n no-preload-921956
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-921956 -n no-preload-921956
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (46.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (46.216579568s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (46.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-163061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-163061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.073307087s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)
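The warning comes from the test harness itself: this profile is started with --network-plugin=cni and no user workload is deployed for it, which is why the DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop steps for newest-cni appear as 0.00s no-ops elsewhere in this report. If you did want to check schedulability by hand, a hedged starting point (context name taken from this run):

    # look for NotReady nodes or Pending pods before scheduling anything
    kubectl --context newest-cni-163061 get nodes -o wide
    kubectl --context newest-cni-163061 get pods -A --field-selector=status.phase=Pending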

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-163061 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-163061 --alsologtostderr -v=3: (1.340749344s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-163061 -n newest-cni-163061
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-163061 -n newest-cni-163061: exit status 7 (100.647704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-163061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-163061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-163061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (11.671235806s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-163061 -n newest-cni-163061
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.06s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-163061 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-163061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-163061 -n newest-cni-163061
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-163061 -n newest-cni-163061: exit status 2 (363.499737ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-163061 -n newest-cni-163061
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-163061 -n newest-cni-163061: exit status 2 (327.62734ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-163061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-163061 -n newest-cni-163061
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-163061 -n newest-cni-163061
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (52.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (52.51650988s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-459127 "pgrep -a kubelet"
I1121 14:32:28.935439   14523 config.go:182] Loaded profile config "auto-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-459127 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tr9gg" [2ebc2b21-3c2d-467f-bb1d-92caadabed35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tr9gg" [2ebc2b21-3c2d-467f-bb1d-92caadabed35] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00418867s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-459127 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-lhhz5" [f6889d32-31b2-4515-89f0-994f01625116] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00468454s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
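This step just waits for the kindnet pod(s) in kube-system to report Running. An equivalent manual query against the same profile (illustrative; the label and namespace are taken from the wait above):

    kubectl --context kindnet-459127 -n kube-system get pods -l app=kindnet -o wide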

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-013140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-013140 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-013140 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-013140 --alsologtostderr -v=3: (12.605048531s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-459127 "pgrep -a kubelet"
I1121 14:32:47.427470   14523 config.go:182] Loaded profile config "kindnet-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)
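KubeletFlags simply dumps the kubelet command line on the node: pgrep -a prints the PID followed by the full argument list, which is how the node's effective kubelet flags can be inspected. With an installed minikube the same view is:

    # show the running kubelet process and all of its flags
    minikube ssh -p kindnet-459127 "pgrep -a kubelet"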

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-459127 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bhrpk" [1f322761-6f21-4207-bdc6-0be3843a89e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bhrpk" [1f322761-6f21-4207-bdc6-0be3843a89e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004457782s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-013140 -n embed-certs-013140
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-013140 -n embed-certs-013140: exit status 7 (120.717659ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-013140 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)
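This step checks two things: a stopped profile reports Stopped through the status Go template (exit status 7, which the test treats as acceptable), and addons can still be toggled while the profile is down; the dashboard enabled here is what the later UserAppExistsAfterStop and AddonExistsAfterStop steps look for after SecondStart. A hedged manual version:

    # query a single status field and capture the exit code
    out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-013140; echo "exit=$?"
    # addons can be enabled while the profile is stopped; the change shows up after the next start
    out/minikube-linux-amd64 addons enable dashboard -p embed-certs-013140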

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (52.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-013140 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-013140 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (51.979250443s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-013140 -n embed-certs-013140
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.35s)
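SecondStart restarts the stopped profile with the original flags; --embed-certs puts the client certificate and key inline in kubeconfig (client-certificate-data / client-key-data) instead of referencing files on disk, and --wait=true blocks until the core components are healthy. A quick, hedged way to confirm the embedded credentials after the restart:

    # non-empty output means the certificate is embedded in kubeconfig rather than referenced by path
    kubectl config view --raw \
      -o jsonpath='{.users[?(@.name=="embed-certs-013140")].user.client-certificate-data}' | head -c 40; echo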

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-459127 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (56.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (56.038014938s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.04s)
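For the custom-flannel group, --cni points at a manifest file (testdata/kube-flannel.yaml) rather than one of the built-in plugin names, and minikube applies that manifest when the cluster comes up. A minimal sketch of the same pattern outside the test tree; the profile name and manifest path below are illustrative:

    # --cni accepts a built-in name (bridge, calico, flannel, kindnet, ...) or a path to a CNI manifest
    minikube start -p my-custom-cni --driver=docker --container-runtime=containerd \
      --cni=/path/to/my-cni-daemonset.yaml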

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-cg4zs" [7c6e8b3a-f75b-4cd3-89d2-206b45863a7d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.079640214s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (60.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m0.325268096s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (60.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-459127 "pgrep -a kubelet"
I1121 14:33:23.622238   14523 config.go:182] Loaded profile config "calico-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-459127 replace --force -f testdata/netcat-deployment.yaml
I1121 14:33:24.362514   14523 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1121 14:33:24.515535   14523 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x5l9j" [f903f5f7-dea4-4522-95f4-eb548627feac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1121 14:33:28.608964   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/functional-565315/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-x5l9j" [f903f5f7-dea4-4522-95f4-eb548627feac] Running
E1121 14:33:32.442222   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/addons-520558/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003120916s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-459127 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-97fj6" [22142f37-a333-4631-862f-16f1364601a1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004055629s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-97fj6" [22142f37-a333-4631-862f-16f1364601a1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003857449s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-013140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (50.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (50.405173082s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-459127 "pgrep -a kubelet"
I1121 14:33:58.467302   14523 config.go:182] Loaded profile config "custom-flannel-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-459127 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c5r9s" [dc4ae834-9c24-433c-b8e3-7e72899e41a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c5r9s" [dc4ae834-9c24-433c-b8e3-7e72899e41a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.080736054s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-013140 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-013140 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-013140 --alsologtostderr -v=1: (1.118837394s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-013140 -n embed-certs-013140
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-013140 -n embed-certs-013140: exit status 2 (341.232799ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-013140 -n embed-certs-013140
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-013140 -n embed-certs-013140: exit status 2 (358.983489ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-013140 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-013140 --alsologtostderr -v=1: (1.072461353s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-013140 -n embed-certs-013140
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-013140 -n embed-certs-013140
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.67s)
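The Pause step freezes the control plane and kubelet, reads the per-component status fields, which report Paused/Stopped with a non-zero exit (the test accepts exit status 2), and then unpauses. Condensed, and assuming an installed minikube:

    minikube pause -p embed-certs-013140
    # non-zero exit is expected while paused; the fields mirror the output captured above
    minikube status -p embed-certs-013140 --format='{{.APIServer}} {{.Kubelet}}'
    minikube unpause -p embed-certs-013140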

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (65.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-459127 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m5.322591317s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-459127 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-459127 "pgrep -a kubelet"
I1121 14:34:20.110045   14523 config.go:182] Loaded profile config "enable-default-cni-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-459127 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x2w5w" [0fe7cbc0-391b-441f-8229-92a13b5437d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x2w5w" [0fe7cbc0-391b-441f-8229-92a13b5437d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00396561s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-459127 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-psmf5" [3c26cbed-49a2-42ed-9d18-e5df76f3320a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005040274s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-459127 "pgrep -a kubelet"
I1121 14:34:53.042954   14523 config.go:182] Loaded profile config "flannel-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-459127 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nh76l" [7b0fd500-1222-461a-867c-2f0cc280235c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nh76l" [7b0fd500-1222-461a-867c-2f0cc280235c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004389525s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-459127 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-459127 "pgrep -a kubelet"
I1121 14:35:11.382430   14523 config.go:182] Loaded profile config "bridge-459127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-459127 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E1121 14:35:11.598427   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-p6fvk" [8f46441a-dead-423e-b004-24104e40f1ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1121 14:35:12.879754   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:13.350627   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-p6fvk" [8f46441a-dead-423e-b004-24104e40f1ac] Running
E1121 14:35:14.719675   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:14.726148   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:14.737622   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:14.759055   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:14.800519   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:14.882410   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:15.043943   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:15.365497   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:15.442032   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/default-k8s-diff-port-376255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:16.007097   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:17.289138   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:18.472265   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/old-k8s-version-012258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:35:19.850507   14523 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/no-preload-921956/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004018981s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-459127 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-459127 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (26/333)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-088626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-088626
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-459127 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-459127

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-459127

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-459127

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-459127

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-459127

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-459127

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-459127

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-459127

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-459127

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-459127

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-459127

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-459127" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-459127" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:28:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-187733
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:25:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-371956
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:26:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-797080
contexts:
- context:
    cluster: NoKubernetes-187733
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:28:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-187733
  name: NoKubernetes-187733
- context:
    cluster: cert-expiration-371956
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:25:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-371956
  name: cert-expiration-371956
- context:
    cluster: kubernetes-upgrade-797080
    user: kubernetes-upgrade-797080
  name: kubernetes-upgrade-797080
current-context: ""
kind: Config
users:
- name: NoKubernetes-187733
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/NoKubernetes-187733/client.crt
    client-key: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/NoKubernetes-187733/client.key
- name: cert-expiration-371956
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/cert-expiration-371956/client.crt
    client-key: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/cert-expiration-371956/client.key
- name: kubernetes-upgrade-797080
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/kubernetes-upgrade-797080/client.crt
    client-key: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/kubernetes-upgrade-797080/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-459127

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459127"

                                                
                                                
----------------------- debugLogs end: kubenet-459127 [took: 3.249065664s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-459127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-459127
--- SKIP: TestNetworkPlugins/group/kubenet (3.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-459127 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-459127" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:25:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-371956
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-11004/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:26:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-797080
contexts:
- context:
    cluster: cert-expiration-371956
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:25:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-371956
  name: cert-expiration-371956
- context:
    cluster: kubernetes-upgrade-797080
    user: kubernetes-upgrade-797080
  name: kubernetes-upgrade-797080
current-context: ""
kind: Config
users:
- name: cert-expiration-371956
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/cert-expiration-371956/client.crt
    client-key: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/cert-expiration-371956/client.key
- name: kubernetes-upgrade-797080
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/kubernetes-upgrade-797080/client.crt
    client-key: /home/jenkins/minikube-integration/21847-11004/.minikube/profiles/kubernetes-upgrade-797080/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-459127

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-459127" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459127"

                                                
                                                
----------------------- debugLogs end: cilium-459127 [took: 4.352129293s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-459127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-459127
--- SKIP: TestNetworkPlugins/group/cilium (4.57s)

                                                
                                    